Infinitesimals have been used to express the idea of objects so small that there is no way to see them or to measure them. The insight in exploiting infinitesimals was that such objects could still retain certain specific properties, such as angle or slope, even though they were quantitatively small. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinite-th" item in a series. It was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz.
In common speech, an infinitesimal object is an object which is smaller than any feasible measurement, but not zero in size; or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective, "infinitesimal" in the vernacular means "extremely small". In order to give it a meaning it usually has to be compared to another infinitesimal object in the same context (as in a derivative). Infinitely many infinitesimals are summed to produce an integral.
Archimedes used what eventually came to be known as the Method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the Method of Exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular calculation of area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.
The use of infinitesimals by Leibniz relied upon heuristic principles, such as the Law of Continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the Transcendental Law of Homogeneity that specifies procedures for replacing expressions involving inassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals in defining continuity and an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed non-standard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.
History of the infinitesimal
The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c.287 BC–c.212 BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals. His Archimedean property defines a number x as infinite if it satisfies the conditions |x|>1, |x|>1+1, |x|>1+1+1, ..., and infinitesimal if x≠0 and a similar set of conditions holds for 1/x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.
The Indian mathematician Bhāskara II (1114–1185) described a geometric technique for expressing the change in sin θ in terms of cos θ times a change in θ. Prior to the invention of calculus, mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and René Descartes' method of normals. There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst. Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (ε, δ)-definition of limit and set theory. While infinitesimals eventually disappeared from the calculus, their mathematical study continued through the work of Levi-Civita and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis.
First-order properties
In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible will still be available. Typically elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x, x + 0 = x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." However, statements of the form "for any set S of numbers ..." may not carry over. This limitation on quantification is referred to as first-order logic.
It superficially seems clear that the resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a nonarchimedean system, and the Archimedean principle can be expressed by quantification over sets; but this is not so. It is trivial to conservatively extend any theory including the reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms asserting that some number is smaller than 1/2, 1/3, 1/4, and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism. This, too, is wrong, at least as a formal statement, since it presumes some underlying model of set theory.
We can distinguish three levels at which a nonarchimedean number system could have first-order properties compatible with those of the reals:
- An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom x + y = y + x holds.
- A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, *, and ≤. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. For example, every number must have a cube root.
- The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed using +, *, and ≤). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct, but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.
Number systems that include infinitesimals
Formal series
Laurent series
An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
The Levi-Civita field
The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating point.
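As a concrete illustration of the claim that such elements can be represented on a computer, here is a minimal Python sketch (not from the original text) that stores an element as a finite dictionary from rational exponents of the basic infinitesimal x to real coefficients, truncating high-order terms. The class name, the cutoff, and the comparison rule are illustrative choices, not a standard library.

```python
from fractions import Fraction

class LC:
    """Finite sums  sum_q c_q * x**q  with rational exponents q, where x is an
    infinitesimal; terms with exponent above CUTOFF are discarded as negligible."""
    CUTOFF = Fraction(5)

    def __init__(self, terms):
        # terms: dict {Fraction exponent: float coefficient}
        self.t = {q: c for q, c in terms.items() if c != 0 and q <= self.CUTOFF}

    @classmethod
    def const(cls, c):
        return cls({Fraction(0): float(c)})

    @classmethod
    def x(cls, q=Fraction(1)):
        return cls({Fraction(q): 1.0})

    def __add__(self, other):
        out = dict(self.t)
        for q, c in other.t.items():
            out[q] = out.get(q, 0.0) + c
        return LC(out)

    def __mul__(self, other):
        out = {}
        for q1, c1 in self.t.items():
            for q2, c2 in other.t.items():
                out[q1 + q2] = out.get(q1 + q2, 0.0) + c1 * c2
        return LC(out)

    def leading(self):
        # the smallest exponent dominates: lower powers of x are "bigger"
        q = min(self.t) if self.t else None
        return (q, self.t.get(q, 0.0))

    def is_positive(self):
        q, c = self.leading()
        return q is not None and c > 0

    def __lt__(self, other):
        # dictionary (lexicographic) order via the sign of the difference
        diff = other + (LC.const(-1) * self)
        return diff.is_positive()

    def __repr__(self):
        return " + ".join(f"{c:g}*x^{q}" for q, c in sorted(self.t.items())) or "0"

eps = LC.x()                    # the basic infinitesimal x
root = LC.x(Fraction(1, 2))     # x**(1/2): a square root of x exists here
print(root * root)              # -> 1*x^1
print(eps * eps < eps)          # True: x^2 is negligible next to x
print(eps < LC.const(1e-30))    # True: x is below every positive real
```

Because exponents are stored as fractions, the basic infinitesimal does have a square root in this sketch, in contrast with the plain Laurent-series system described above.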
Transseries
A related and larger system, the field of transseries, allows expressions that combine powers, exponentials, and logarithms of an indeterminate x, where for purposes of ordering x is considered to be infinite.
Surreal numbers
Conway's surreal numbers fall into category 2. They are a system that was designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis. Certain transcendental functions can be carried over to the surreals, including logarithms and exponentials, but most, e.g., the sine function, cannot. The existence of any particular surreal number, even one that has a direct counterpart in the reals, is not known a priori, and must be proved.
Hyperreals
The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way in order to allow all of classical analysis to be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers N has a natural counterpart *N, which contains both finite and infinite integers. A proposition such as "for every natural number n, sin(nπ) = 0" carries over to the hyperreals as "for every hypernatural number n, *sin(nπ) = 0".
Smooth infinitesimal analysis
Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle — i.e., not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x2 = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.
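Classically minded readers can get a feel for nilsquare infinitesimals from dual numbers, numbers of the form a + b·eps with eps² treated as exactly 0, which are the engine of forward-mode automatic differentiation. The sketch below is only this classical analogy, not smooth infinitesimal analysis itself; the class and function names are illustrative.

```python
class Dual:
    """Numbers a + b*eps with eps**2 treated as exactly 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

    __rmul__ = __mul__

eps = Dual(0.0, 1.0)
print((eps * eps).a, (eps * eps).b)   # 0.0 0.0 -> eps squared vanishes

def f(x):
    return x * x * x + 2 * x          # f(x) = x**3 + 2x

y = f(Dual(3.0, 1.0))                 # evaluate at 3 + eps
print(y.a)                            # 33.0 = f(3)
print(y.b)                            # 29.0 = f'(3) = 3*3**2 + 2
```

Evaluating f at 3 + eps carries the derivative along automatically in the eps coefficient, which is the computational payoff of a number whose square is zero but which is not itself zero.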
Infinitesimal delta functions
Cauchy used an infinitesimal to write down a unit impulse (an infinitely tall and narrow Dirac-type delta function whose integral is 1) in a number of articles in 1827; see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals.
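As a rough illustration of the ultrapower idea, one can model would-be hyperreals as sequences of reals with componentwise arithmetic; the null sequence (1, 1/2, 1/3, ...) then sits below every positive real "eventually". The Python sketch below deliberately omits the ultrafilter (which is what turns "eventually" into a genuine total order), so it is an analogy, not the construction itself, and the finite sample length is an arbitrary choice.

```python
def seq(f, n=1000):
    """First n terms of the sequence k -> f(k), k = 1, 2, 3, ..."""
    return [f(k) for k in range(1, n + 1)]

def add(u, v):          # componentwise operations, as in the ultrapower
    return [a + b for a, b in zip(u, v)]

def mul(u, v):
    return [a * b for a, b in zip(u, v)]

def eventually_less(u, v):
    """True if u_k < v_k from some index on (checked on the finite sample).
    In the real construction 'for U-almost all k' replaces 'eventually'."""
    tail = [a < b for a, b in zip(u, v)]
    return all(tail[len(tail) // 2:])   # crude finite-sample stand-in

eps   = seq(lambda k: 1.0 / k)          # null sequence -> behaves as infinitesimal
omega = seq(lambda k: float(k))         # its reciprocal -> behaves as infinite
r     = seq(lambda k: 0.01)             # the ordinary real 0.01

print(eventually_less(eps, r))               # True: eps < 0.01 "eventually"
print(eventually_less(r, omega))             # True: omega exceeds the real 0.01
print(eventually_less(add(eps, eps), r))     # True: 2*eps is still infinitesimal
print(mul(eps, omega)[:5])                   # [1.0, 1.0, 1.0, 1.0, 1.0] = the real 1
```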
Logical properties
The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and on which collection of axioms is used. We consider here systems where infinitesimals can be shown to exist.
In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals, as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/n. The ability to switch "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory: for any positive integer n it is possible to find a real number between 1/n and zero, but this real number will depend on n. Here, one chooses n first, then one finds the corresponding x. In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (R) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this will be true. The question is: what is this model? What are its properties? Is there only one such model? There are two basic approaches to constructing such a model:
- 1) Extend the number system so that it contains more numbers than the real numbers.
- 2) Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard.
In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal Set Theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number which is less, in absolute value, than any positive standard real number.
In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., at the coarsest level there are no infinitesimals or unlimited numbers. Infinitesimals appear at a finer level, and there are also infinitesimals with respect to this new level, and so on.
Infinitesimals in teaching
Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson, which bears the motto "What one fool can do another can". Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1 and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1.
References
- Katz, Mikhail; Sherry, David (2012), "Leibniz’s Infinitesimals: Their Fictionality, Their Modern Implementations, and Their Foes from Berkeley to Russell and Beyond", Erkenntnis, arXiv:1205.0174, doi:10.1007/s10670-012-9370-y.
- Netz, Reviel; Saito, Ken; Tchernetska, Natalie: A new reading of Method Proposition 14: preliminary evidence from the Archimedes palimpsest. I. SCIAMVS 2 (2001), 9–29.
- Archimedes, The Method of Mechanical Theorems; see Archimedes Palimpsest
- Shukla, Kripa Shankar (1984). "Use of Calculus in Hindu Mathematics". Indian Journal of History of Science 19: 95–104.
- George Berkeley, The Analyst; or a discourse addressed to an infidel mathematician
- "Infinitesimals in Modern Mathematics". Jonhoyle.com. Retrieved 2011-03-11.
- Khodr Shamseddine, "Analysis on the Levi-Civita Field: A Brief Overview," http://www.uwec.edu/surepam/media/RS-Overview.pdf
- G. A. Edgar, "Transseries for Beginners," http://www.math.ohio-state.edu/~edgar/preprints/trans_begin/
- Available online at http://www.gutenberg.org/ebooks/33283
- Ely, Robert (2010). "Nonstandard student conceptions about infinitesimals". Journal for Research in Mathematics Education 41 (2): 117–146.
- B. Crowell, "Calculus" (2003)
- Ehrlich, P. (2006) The rise of non-Archimedean mathematics and the roots of a misconception. I. The emergence of non-Archimedean systems of magnitudes. Arch. Hist. Exact Sci. 60, no. 1, 1–121.
- J. Keisler, "Elementary Calculus" (2000) University of Wisconsin
- K. Stroyan "Foundations of Infinitesimal Calculus" (1993)
- Stroyan, K. D.; Luxemburg, W. A. J. Introduction to the theory of infinitesimals. Pure and Applied Mathematics, No. 72. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1976.
- Robert Goldblatt (1998) "Lectures on the hyperreals" Springer.
- Cutland et al. "Nonstandard Methods and Applications in Mathematics" (2007) Lecture Notes in Logic 25, Association for Symbolic Logic.
- "The Strength of Nonstandard Analysis" (2007) Springer.
- Laugwitz, D. (1989). "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820". Arch. Hist. Exact Sci. 39 (3): 195–245. doi:10.1007/BF00329867.
- Yamashita, H.: Comment on: "Pointwise analysis of scalar Fields: a nonstandard approach" [J. Math. Phys. 47 (2006), no. 9, 092301; 16 pp.]. J. Math. Phys. 48 (2007), no. 8, 084101, 1 page.
Assignment Sheet 1
Please note: Read carefully for the difference between lines, rays, and segments. Not all notation will appear correctly.
Historically, there have been several different approaches to doing geometry, not all of them axiomatic. In order to be able to do geometry, we need a common set of definitions and axioms. Definitions are important because all results depend on the definition used. It is often possible that more than one definition is acceptable. The same can be said of axioms. There are many possible sets of axioms that result in what is typically referred to as Euclidean geometry. I have chosen one particular set of axioms for this class, similar to the ones used by the mathematician Birkhoff, but this is certainly not the only choice. However, it is important that whenever you do any proofs in this class, you do not rely on results we have not assumed or proven. Therefore, you should carefully read all your proofs to be sure that you state the justification for each step.
As we have discussed, it is impossible to define every term in mathematics. The terms point, line, and plane will be undefined for us. Although we can discuss what we mean by these terms, these are the basic objects of study for us and we cannot define them in terms of other things.
Assumptions and postulates:
We assume the properties of the real numbers, of sets and set operations, and of algebra.
Postulate 1: Given any two different points, there is exactly one line that contains both of them. We often restate this as, “Two points determine a line.”
For the next postulate, we assume that we have picked a system of measurement.
Postulate 2 (Distance Postulate)/ Definition of Distance: To each pair of points there is a unique number. This number is called the distance between the two points. For two points P and Q, the distance between them will be written PQ.
Postulate 3 (Number Line Postulate): The points of a given line can be made to correspond to the real numbers in such a way that:
i. Every point of the line corresponds to exactly one real number
ii. Every real number corresponds to exactly one point on the line
iii. The distance between any two points is the absolute value of the difference between the corresponding numbers
Definitions: A coordinate system is a choice of correspondence between points and numbers as described in Postulate 3. The coordinate of a point is the number assigned via this correspondence.
Postulate 4 (Number Line Placement): Given two points on a line, say P and Q, the coordinate system can be chosen so that P is at 0 and the coordinate of Q has a positive value.
Definition: For three points, P, Q, and R, Q is said to be between P and R if PQ + QR = PR.
Any collection of points is said to be collinear if they all lie on the same line.
The absolute value of a real number x, written |x|, is given by: x when x > 0 or x = 0, and –x when x < 0.
The term space or 3-space will be undefined. Informally, we are using the term to mean three-dimensional space.
Definitions: Objects are said to be coplanar if they lie in the same plane.
A segment is a set of two points together with all the points between them. For two points P and Q, the segment will be written PQ, and P and Q are called the endpoints of the segment. The length of this segment is the distance PQ.
A ray PQ is the set of points P and Q together with all points R such that either R is between P and Q, or Q is between P and R. P is called the endpoint of the ray.
A point R is the midpoint of a segment PQ if R is between P and Q and PR = QR. R, or any object which intersects PQ at R, is said to bisect PQ.
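A small numerical illustration, in Python, of Postulates 2-4 and the definitions above; the particular coordinates are arbitrary sample values chosen for the example.

```python
def dist(a, b):
    """Postulate 3(iii): the distance is the absolute value of the
    difference of the coordinates assigned to the two points."""
    return abs(a - b)

def is_between(p, q, r):
    """Definition: Q is between P and R if PQ + QR = PR."""
    return abs(dist(p, q) + dist(q, r) - dist(p, r)) < 1e-12

def is_midpoint(r, p, q):
    """Definition: R is the midpoint of PQ if R is between P and Q
    and PR = QR."""
    return is_between(p, r, q) and abs(dist(p, r) - dist(q, r)) < 1e-12

# Coordinates on a line, with P placed at 0 as Postulate 4 allows.
P, Q, R = 0.0, 6.0, 3.0
print(dist(P, Q))            # 6.0
print(is_between(P, R, Q))   # True: R lies between P and Q
print(is_midpoint(R, P, Q))  # True: PR = QR = 3
```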
Postulate 5.1: Every plane contains at least 3 non-collinear points.
Postulate 5.2: Space contains at least 4 non-coplanar points.
Postulate 6: If a plane contains two given points, it contains the line through the two points.
DUE THURSDAY, JANUARY 27:
These problems will be discussed over a period of several classes, but will be collected next class.
i. A square is a rectangle.
ii. A scalene triangle is a triangle with no two sides having a common length.
iii. An equilateral triangle is isosceles.
iv. A square is a rhombus.
FOR CAREFUL WRITE-UP:
Postulate 7: There is at least one plane containing any three given points. If the points are non-collinear, then there is only one such plane.
Postulate 8: If two different planes intersect, then their intersection is a line.
Definition: A set of points is called convex if, for any two points in the set, every point on the segment joining the points is contained in the set.
Postulate 9 (Plane Separation): Given a line and a plane containing it, the points of the plane that do not lie on the line form two sets such that each set is convex and, given two points, one in each set, the segment joining the points intersects the line.
Definitions: The two sets described in Postulate 9 are called half-planes, and the line is called the edge of each half-plane. The line is also said to separate the plane into two half-planes. Two points that lie in the same half-plane are said to lie on the same side of the line; if they are in different half-planes, they lie on opposite sides of the line.
Postulate 10 (Space Separation): [You will state this one yourself.]
Definitions: The two sets described in Postulate 10 are called half-spaces, and the plane is called a face of each half-space.
An angle is the union of two rays that have the same endpoint. The two rays are each called the sides of the angle, and their common endpoint is called the vertex of the angle.
For any three points, the union of the segments joining them is called a triangle, the segments are called the sides, and the three points are called the vertices of the triangle.
Postulate 11 (Angle Measurement): To every angle there corresponds a number greater than or equal to 0 and less than or equal to 180.
Definition: The measure of an angle is the number assigned through the correspondence in Postulate 11.
Postulate 12 (Angle Construction): For any ray AB such that the ray lies on the edge of a half-plane, and for any number r, 0 < r < 180, there is exactly one ray AC with the same endpoint A and with C in the half-plane, such that m<BAC = r. For r = 0 or r = 180, C would lie on the line AB (the edge of the half-plane) itself.
Postulate 13 (Angle Addition): Angle measure is additive for angles which share a common ray: For angles <BAC and <CAD, m<BAC + m<CAD = m<BAD.
Definitions: If two angles, <BAC and <CAD, share a common ray and B, A, and D are collinear, then the two angles form a linear pair. Two angles are supplementary if the sum of their measures is 180, and each angle is said to be a supplement of the other. If two angles form a linear pair and have the same measure, then each angle is a right angle. Two intersecting sets, each of which is a line, a segment, or a ray, are perpendicular if the angles formed by the intersection are right angles.
Two angles are complementary if the sum of their measures is 90, and each angle is said to be a complement of the other.
An angle with a measure less than 90 is called acute, while an angle with measure greater than 90 is called obtuse.
Two angles are congruent if their measures are equal. Two segments are congruent if they have the same length. Two triangles are congruent if there is a correspondence between the angles and segments of each triangle such that the corresponding angles and segments are congruent.
A ray AC is a bisector of angle <BAD if <BAC ≅ <CAD and the measures of these congruent angles are each not greater than 90.
A median of a triangle is a segment that has one endpoint at a vertex of the triangle and the other endpoint at the midpoint of the opposite side.
Postulate 14 (Supplement Postulate): If two angles form a linear pair, then they are supplementary.
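The following Python snippet checks the Supplement Postulate numerically in a coordinate model of the axioms; it is a sanity check, not a proof. The angle measure is computed from a dot product, and the sample points are arbitrary.

```python
import math

def angle_measure(vertex, p1, p2):
    """Degree measure of the angle <P1-Vertex-P2, computed from the dot
    product of the direction vectors of the two sides."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# <BAC and <CAD form a linear pair: B, A, D are collinear with A between them.
A, B, D = (0.0, 0.0), (4.0, 0.0), (-3.0, 0.0)
C = (1.0, 2.0)                      # any point off the line BD

m1 = angle_measure(A, B, C)         # m<BAC
m2 = angle_measure(A, C, D)         # m<CAD
print(round(m1 + m2, 9))            # 180.0, as Postulate 14 asserts
```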
Postulate 15 (SAS Postulate): If there is a correspondence between two triangles such that there are two sides and the angle included between the sides of the first triangle congruent to the corresponding parts of the second triangle, then the triangles are congruent.
Note: We may later take a different look at congruence using isometries of the plane in which we will consider alternative postulates for congruence.
DUE TUESDAY, FEBRUARY 1:
iv. 3 1/2
FOR CAREFUL WRITE-UP
Definition: In a given plane, the perpendicular bisector of a segment is the line that is perpendicular to the segment and intersects the segment at its midpoint.
DUE TUESDAY, FEBRUARY 8:
FOR CAREFUL WRITE-UP:
Definition: Two lines are parallel if they are coplanar and do not intersect, and are skew if they are not coplanar and do not intersect.
A transversal of two lines in a plane is a third line that intersects the two lines in two different points.
DUE TUESDAY, FEBRUARY 15:
FOR CAREFUL WRITE-UP
Euclidean geometry is named for the Greek mathematician Euclid, and it is what we have focused on so far in this course. One important feature of Euclidean geometry is known as the Parallel Postulate:
Postulate 16 (Parallel Postulate): Through a given point not on a given line, there is at most one line parallel to the given line. [We already proved that there is at least one such line.]
In this week’s problems, we will look at some alternative choices for Postulate 16.
Amazingly, it is not possible to prove the Parallel Postulate from the other 15 postulates and their consequences. This confounded mathematicians for a very long time. We will see later that in non-Euclidean geometry, it is possible to have the other postulates hold true, but to have more than one parallel through a point not on a given line, or to alter things so that there are no parallels, that is, so that lines always intersect.
DUE TUESDAY, FEBRUARY 22:
FOR CAREFUL WRITE-UP:
The quiz on March 1 will cover all the work up through the problems on this page. It will be open notes, and you may quote from any work we have done on the quiz.
Assignment sheet 2
We have, up to now, done Euclidean geometry without use of coordinates. We will now begin to look at geometry in the plane and in 3 dimensions with coordinates and we will think about congruence in a new way.
Definitions: A function T which is 1-1 and has the plane as both its domain and codomain is called a transformation. A transformation T is an isometry if it preserves distance. Two objects A and B in the plane will be said to be congruent (A is congruent to B) if there is an isometry T with T(A) = B. This will be referred to as the “new definition.”
The three most commonly discussed transformations are: reflection, rotation, and translation. We define them here:
A reflection T across a line m is a transformation satisfying the condition that, for every point P in the plane, setting P' = T(P), the line PP' is perpendicular to m and, letting Q be the point of intersection of the lines, PQ = P'Q. A rotation T around a point R by an angle e, e in radians, -π < e < π, is a transformation such that, for every point P in the plane, if P' = T(P), then m<P'RP = e and P'R = PR. Notice that in the preceding definition angles have an orientation. A translation T by a directed segment AB is a transformation such that, for any point C, if C' = T(C), then C' completes a parallelogram C'CAB, unless C' is on the line AB, in which case C'C = BA and C'B = CA.
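Here is a brief Python sketch of the three transformations in coordinates, together with a numerical check that each preserves distance and is therefore an isometry. The particular line, angle, and directed segment are arbitrary sample choices; this anticipates the coordinate work introduced later in the course.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def reflect(p, a, b):
    """Reflection across the line through points a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    # project p - a onto the line direction; the foot is the point Q of the definition
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    foot = (a[0] + t * dx, a[1] + t * dy)
    return (2 * foot[0] - p[0], 2 * foot[1] - p[1])

def rotate(p, center, e):
    """Rotation about `center` by angle e (radians, counterclockwise)."""
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(e) - y * math.sin(e),
            center[1] + x * math.sin(e) + y * math.cos(e))

def translate(p, a, b):
    """Translation by the directed segment AB."""
    return (p[0] + b[0] - a[0], p[1] + b[1] - a[1])

P, Q = (1.0, 2.0), (4.0, -1.0)
maps = [lambda p: reflect(p, (0, 0), (1, 3)),
        lambda p: rotate(p, (2.0, 2.0), 0.7),
        lambda p: translate(p, (0, 0), (5, -2))]
for T in maps:
    # difference of distances before and after: 0.0 up to rounding each time
    print(round(dist(T(P), T(Q)) - dist(P, Q), 12))
```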
With a new definition of congruence, we want to show that the same objects are congruent as before. To do this, we must show objects which were congruent under the new definition are congruent under the old definition, and that objects congruent under the old definition are congruent under the new definition. Even though we redefined congruence, as a result of 49-54, you may continue to use the old theorems about congruence to prove new results.
DUE TUESDAY, MARCH 1
As a result of the above, we know that objects congruent under the new definition are congruent under the old definition. We are not yet able to show that objects congruent under the old definition are congruent under the new definition. The reason for this is that, given two objects congruent under the old definition, we need to find an isometry that maps the first object onto the second. In order for this to happen, we need to know what maps are isometries.
FOR CAREFUL WRITE-UP:
We will begin to do some work using coordinates in the plane (and in three dimensions). We are going to assume that coordinates (x, y) have been established for the plane, and coordinates (x, y, z) have been established for 3 dimensions.
DUE TUESDAY, MARCH 8
FOR CAREFUL WRITE-UP:
IN-CLASS EXPLORATION FOR MARCH 10: What happens to the plane (or objects in the plane, if you prefer) if you perform two reflections? Put another way, what is the result of performing 2 reflections across lines l1 and l2 in the plane? There are three possible cases for the two lines of reflection: l1 and l2 are identical, they are parallel to each other, or they intersect.
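One way to explore this question numerically, assuming reflections are computed in coordinates as in the previous sketch: compose two reflections and compare the result with a candidate rotation (intersecting lines) or translation (parallel lines). The specific lines and point below are sample choices, and the comparison checks only one point, so it suggests rather than proves the answer.

```python
import math

def reflect(p, a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    foot = (a[0] + t * dx, a[1] + t * dy)
    return (2 * foot[0] - p[0], 2 * foot[1] - p[1])

P = (2.0, 5.0)

# Case 1: l1 and l2 intersect at the origin; the angle from l1 (x-axis) to l2 is 30 degrees.
l1 = ((0, 0), (1, 0))
l2 = ((0, 0), (math.cos(math.pi / 6), math.sin(math.pi / 6)))
image = reflect(reflect(P, *l1), *l2)
a = 2 * math.pi / 6   # conjecture: rotation by twice the angle between the lines
rotated = (P[0] * math.cos(a) - P[1] * math.sin(a),
           P[0] * math.sin(a) + P[1] * math.cos(a))
print([round(u - v, 12) for u, v in zip(image, rotated)])     # both entries 0.0 up to rounding

# Case 2: l1 and l2 parallel (both horizontal), 2 units apart.
l1 = ((0, 0), (1, 0))
l2 = ((0, 2), (1, 2))
image = reflect(reflect(P, *l1), *l2)
translated = (P[0], P[1] + 4)   # conjecture: translation by twice the gap, from l1 toward l2
print([round(u - v, 12) for u, v in zip(image, translated)])  # both entries 0.0 up to rounding
```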
DUE TUESDAY, MARCH 15
FOR CAREFUL WRITE-UP:
NOTE: The first exam covers problems 1-75. It will be open notes.
IN-CLASS EXPLORATION ON MARCH 24: We want to know what happens to the plane if you perform three reflections across 3 not necessarily distinct lines. Based on problem 59, you know how three lines can be arranged. For each of these possibilities, and considering different orders, you should come out with two different possible outcomes for the type of transformation you get: a glide reflection, which is a reflection followed by a translation (the most commonly cited example of a glide reflection is an idealized set of footprints, with two symmetric feet leaving tracks equidistant from a fixed line), or a reflection. Use the two problems below to summarize your answer.
The last case to consider is four reflections. It can be proven that the composition of four reflections is either a translation or a rotation. As a result of all this, it turns out that any number of reflections is equivalent to either a reflection, rotation, translation, or glide reflection. From here, there’s a short leap to prove the Theorem: Every distance-preserving transformation T is a composite of reflections. You do not need to prove the theorem. Thus the new definition of congruence is equivalent to the old definition.
In Euclidean geometry, a circle is the set of all points in a plane at a fixed distance, called the radius, from a given point, the center. The length of the circle is called its circumference, and any continuous portion of the circle is called an arc.
A circle is a simple closed curve that divides the plane into an interior and exterior. The interior of the circle is called a disk.
Mathematically, a circle can be understood in several other ways as well. For instance, it is a special case of an ellipse in which the two foci coincide (that is, they are the same point). Alternatively, a circle can be thought of as the conic section attained when a right circular cone is intersected with a plane perpendicular to the axis of the cone.
All circles have similar properties. Some of these are noted below.
- For any circle, the area enclosed and the square of its radius are in a fixed proportion, equal to the mathematical constant π.
- For any circle, the circumference and radius are in a fixed proportion, equal to 2π.
- The circle is the shape with the highest area for a given length of perimeter.
- The circle is a highly symmetrical shape. Every line through the center forms a line of reflection symmetry. In addition, there is rotational symmetry around the center for every angle. The symmetry group is called the orthogonal group O(2,R), and the group of rotations alone is called the circle group T.
- The circle centered at the origin with radius 1 is called the unit circle.
A line segment that connects one point of a circle to another is called a chord. The diameter is a chord that runs through the center of the circle.
- The diameter is the longest chord of the circle.
- Chords equidistant from the center of a circle are equal in length. Conversely, chords that are equal in length are equidistant from the center.
- A line drawn through the center of a circle perpendicular to a chord bisects the chord. Alternatively, one can state that a line drawn through the center of a circle bisecting a chord is perpendicular to the chord. This line is called the perpendicular bisector of the chord. Thus, one could also state that the perpendicular bisector of a chord passes through the center of the circle.
- If a central angle and an inscribed angle of a circle are subtended by the same chord and on the same side of the chord, then the central angle is twice the inscribed angle.
- If two angles are inscribed on the same chord and on the same side of the chord, then they are equal.
- If two angles are inscribed on the same chord and on opposite sides of the chord, then they are supplementary.
- An inscribed angle subtended by a diameter is a right angle.
- The sagitta is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the circumference of the circle.
- Given the length of a chord, y, and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle which will fit around the two segments: r = y²/(8x) + x/2.
- The line drawn perpendicular to the end point of a radius is a tangent to the circle.
- A line drawn perpendicular to a tangent at the point of contact with a circle passes through the center of the circle.
- Tangents drawn from a point outside the circle are equal in length.
- Two tangents can always be drawn from a point outside of the circle.
- The chord theorem states that if two chords, CD and EF, intersect at G, then CG × GD = EG × GF. (Chord theorem; a numerical check of this and the tangent-secant relation appears after this list.)
- If a tangent from an external point D meets the circle at C and a secant from the external point D meets the circle at G and E respectively, then DC² = DG × DE. (Tangent-secant theorem)
- If two secants, DG and DE, also cut the circle at H and F respectively, then DH × DG = DF × DE. (Corollary of the tangent-secant theorem)
- The angle between a tangent and chord is equal to the subtended angle on the opposite side of the chord. (Tangent chord property)
- If the angle subtended by the chord at the center is 90 degrees then l = √(2) × r, where l is the length of the chord and r is the radius of the circle.
- If two secants are drawn to the circle from an external point A, then the measure of angle A is equal to one half the difference of the measures of the two enclosed arcs (DE and BC). This is the secant-secant theorem.
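The intersecting-chords and tangent-secant relations above are easy to sanity-check numerically; the Python sketch below places sample points on the unit circle. The labels echo the statements above, but the coordinates are arbitrary choices.

```python
import math

def on_unit_circle(theta):
    return (math.cos(theta), math.sin(theta))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Chord theorem: chords CD and EF meet at G, then CG*GD = EG*GF.
C, D = on_unit_circle(0.3), on_unit_circle(2.9)
E, F = on_unit_circle(1.2), on_unit_circle(-1.9)
G = line_intersection(C, D, E, F)
print(round(dist(C, G) * dist(G, D) - dist(E, G) * dist(G, F), 12))  # ~0.0

# Tangent-secant: external point D, tangent touches at C, secant meets the
# circle at G (near) and E (far): DC**2 = DG * DE.
Dext = (3.0, 0.0)
Ctan = (1 / 3.0, math.sqrt(1 - 1 / 9.0))     # tangency point seen from (3, 0)
Gn, Ef = (1.0, 0.0), (-1.0, 0.0)             # secant along the x-axis
print(round(dist(Dext, Ctan) ** 2 - dist(Dext, Gn) * dist(Dext, Ef), 12))  # ~0.0
```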
Equation of a circle
In an x-y coordinate system, the circle with center (a, b) and radius r is the set of all points (x, y) such that
(x − a)² + (y − b)² = r².
If the circle is centered at the origin (0, 0), then this formula can be simplified to
x² + y² = r²,
and its tangent will be
xx₁ + yy₁ = r²,
where x₁, y₁ are the coordinates of the common point.
When expressed in parametric equations, (x, y) can be written using the trigonometric functions sine and cosine as
x = a + r cos(t),
y = b + r sin(t),
where t is a parametric variable, understood as the angle the ray from the center to (x, y) makes with the x-axis.
In homogeneous coordinates, each conic section with the equation of a circle has the form
ax² + ay² + 2b₁xz + 2b₂yz + cz² = 0.
It can be proven that a conic section is a circle if and only if the point I(1,i,0) and J(1,-i,0) lie on the conic section. These points are called the circular points at infinity.
In polar coordinates the equation of a circle is
r² − 2rr₀ cos(θ − φ) + r₀² = a²,
where a is the radius of the circle and (r₀, φ) is the polar coordinate of its center. For a circle centered at the pole (origin), this reduces to r = a.
In the complex plane, a circle with a center at c and radius r has the equation |z − c|² = r². Since |z − c|² = (z − c)(z − c)*, the slightly generalized equation pzz* + gz + g*z* = q for real p, q and complex g (where * denotes complex conjugation) is sometimes called a generalized circle. It is important to note that not all generalized circles are actually circles.
The slope of a circle at a point (x, y) can be expressed with the following formula, assuming the center is at the origin and (x, y) is on the circle:
dy/dx = −x/y, for y ≠ 0.
More generally, the slope at a point (x, y) on the circle (x − a)² + (y − b)² = r² (i.e., the circle centered at [a, b] with radius r units) is given by
dy/dx = −(x − a)/(y − b),
provided that y ≠ b, of course.
- The area enclosed by a circle is A = πr² = πd²/4 ≈ 0.7854 d², that is, approximately 79 percent of the circumscribed square (whose side equals the diameter d).
- The length of a circle's circumference is C = 2πr = πd.
- Alternate formula for circumference: The ratio of the circumference c to the area A is c/A = 2πr/(πr²). The π and one factor of r can be canceled, leaving c/A = 2/r. Therefore, solving for c: c = 2A/r. So the circumference is equal to 2 times the area divided by the radius. This can be used to calculate the circumference when a value for π cannot be computed.
The diameter of a circle is d = 2r.
An inscribed angle ψ is exactly half of the corresponding central angle θ. Hence, all inscribed angles that subtend the same arc have the same value. Inscribed angles on the opposite arc of the same chord are supplementary to ψ. In particular, every inscribed angle that subtends a diameter is a right angle.
An alternative definition of a circle
Apollonius of Perga showed that a circle may also be defined as the set of points having a constant ratio of distances to two foci, A and B.
The proof is as follows. A line segment PC bisects the interior angle APB, since the segments are similar: AP/BP = AC/BC.
Analogously, a line segment PD bisects the corresponding exterior angle. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees, i.e., a right angle. The set of points P that form a right angle with a given line segment CD form a circle, of which CD is the diameter.
As a point of clarification, note that C and D are determined by A, B, and the desired ratio (i.e. A and B are not arbitrary points lying on an extension of the diameter of an existing circle).
Calculating the parameters of a circle
Given three non-collinear points lying on the circle
The radius of the circle is given by
r = |P1 − P2| |P2 − P3| |P3 − P1| / (2 |(P1 − P2) × (P2 − P3)|),
where P1, P2, P3 are the three points and × denotes the vector cross product.
The center of the circle is the circumcenter of the triangle P1P2P3, the unique point in the plane of the three points that is equidistant from all of them (a concrete computation is sketched in the code below).
Plane unit normal
A unit normal of the plane containing the circle is given by
n = (P2 − P1) × (P3 − P1) / |(P2 − P1) × (P3 − P1)|.
Given the radius, r, center, Pc, a point on the circle, P0, and a unit normal of the plane containing the circle, n, the parametric equation of the circle starting from the point P0 and proceeding counterclockwise is given by the following equation:
P(t) = Pc + r cos(t) u + r sin(t) (n × u), where u = (P0 − Pc)/|P0 − Pc|.
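The following Python sketch carries out the computations described in this section for three non-collinear points in 3-space: radius, center, plane unit normal, and points from the parametric equation. The function names are illustrative, and the center is obtained as the circumcenter via standard barycentric weights, which is one common approach rather than necessarily the formula intended by the original page.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(a, s): return [a[i] * s for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def norm(a): return math.sqrt(dot(a, a))

def circle_through(P1, P2, P3):
    u, v, w = sub(P2, P1), sub(P3, P1), sub(P3, P2)
    n = cross(u, v)                                 # normal direction of the plane
    two_area = norm(n)                              # twice the triangle area
    r = norm(u) * norm(v) * norm(w) / (2.0 * two_area)   # circumradius
    # circumcenter via barycentric weights (squared side lengths opposite each vertex)
    a2, b2, c2 = dot(w, w), dot(v, v), dot(u, u)
    wa = a2 * (b2 + c2 - a2)
    wb = b2 * (c2 + a2 - b2)
    wc = c2 * (a2 + b2 - c2)
    s = wa + wb + wc
    center = scale(add(add(scale(P1, wa), scale(P2, wb)), scale(P3, wc)), 1.0 / s)
    n_hat = scale(n, 1.0 / two_area)
    return r, center, n_hat

def circle_point(center, r, n_hat, P0, t):
    u_hat = scale(sub(P0, center), 1.0 / norm(sub(P0, center)))
    v_hat = cross(n_hat, u_hat)
    return add(center, add(scale(u_hat, r * math.cos(t)), scale(v_hat, r * math.sin(t))))

P1, P2, P3 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
r, c, n_hat = circle_through(P1, P2, P3)
print(round(r, 6), [round(x, 6) for x in c])                       # radius and center
print([round(norm(sub(p, c)), 6) for p in (P1, P2, P3)])           # equal distances to all three points
print([round(x, 6) for x in circle_point(c, r, n_hat, P1, 0.0)])   # = P1 at t = 0
```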
Global Positioning System Overview
GPS is a Satellite Navigation System
- GPS is funded by and controlled by the U. S. Department of Defense (DOD). While there are many thousands of civil users of GPS world-wide, the system was designed for and is operated by the U. S. military.
- GPS provides specially coded satellite signals that can be processed in a GPS receiver, enabling the receiver to compute position, velocity and time.
- Four GPS satellite signals are used to compute positions in three dimensions and the time offset in the receiver clock.
- The Space Segment of the system consists of the GPS satellites. These space vehicles (SVs) send radio signals from space.
- The nominal GPS Operational Constellation consists of 24 satellites that orbit the earth in 12 hours. There are often more than 24 operational satellites as new ones are launched to replace older satellites. The satellite orbits repeat almost the same ground track (as the earth turns beneath them) once each day. The orbit altitude is such that the satellites repeat the same track and configuration over any point approximately each 24 hours (4 minutes earlier each day). There are six orbital planes (with nominally four SVs in each), equally spaced (60 degrees apart), and inclined at about fifty-five degrees with respect to the equatorial plane. This constellation provides the user with between five and eight SVs visible from any point on the earth.
- The Control Segment consists of a system of tracking stations located around the world.
- The Master Control facility is located at Schriever Air Force Base (formerly Falcon AFB) in Colorado. These monitor stations measure signals from the SVs which are incorporated into orbital models for each satellite. The models compute precise orbital data (ephemeris) and SV clock corrections for each satellite. The Master Control station uploads ephemeris and clock data to the SVs. The SVs then send subsets of the orbital ephemeris data to GPS receivers over radio signals.
The GPS User Segment consists of the GPS receivers and the user community. GPS receivers convert SV signals into position, velocity, and time estimates. Four satellites are required to compute the four dimensions of X, Y, Z (position) and Time. GPS receivers are used for navigation, positioning, time dissemination, and other research.
- Navigation in three dimensions is the primary function of GPS. Navigation receivers are made for aircraft, ships, ground vehicles, and for hand carrying by individuals.
- Precise positioning is possible using GPS receivers at reference locations providing corrections and relative positioning data for remote receivers. Surveying, geodetic control, and plate tectonic studies are examples.
- Time and frequency dissemination, based on the precise clocks on board the SVs and controlled by the monitor stations, is another use for GPS. Astronomical observatories, telecommunications facilities, and laboratory standards can be set to precise time signals or controlled to accurate frequencies by special purpose GPS receivers.
- Research projects have used GPS signals to measure atmospheric parameters.
GPS Positioning Services Specified In The Federal Radionavigation Plan
Precise Positioning Service (PPS)
- Authorized users with cryptographic equipment and keys and specially equipped receivers use the Precise Positioning System. U. S. and Allied military, certain U. S. Government agencies, and selected civil users specifically approved by the U. S. Government, can use the PPS.
PPS Predictable Accuracy
- 22 meter Horizontal accuracy
- 27.7 meter vertical accuracy
- 200 nanosecond time (UTC) accuracy
Standard Positioning Service (SPS)
- Civil users worldwide use the SPS without charge or restrictions. Most receivers are capable of receiving and using the SPS signal. The SPS accuracy is intentionally degraded by the DOD by the use of Selective Availability.
SPS Predictable Accuracy
- 100 meter horizontal accuracy
- 156 meter vertical accuracy
- 340 nanoseconds time accuracy
- These GPS accuracy figures are from the 1999 Federal Radionavigation Plan. The figures are 95% accuracies, and express the value of two standard deviations of radial error from the actual antenna position to an ensemble of position estimates made under specified satellite elevation angle (five degrees) and PDOP (less than six) conditions.
- For horizontal accuracy figures 95% is the equivalent of 2drms (two-distance root-mean-squared), or twice the radial error standard deviation. For vertical and time errors 95% is the value of two-standard deviations of vertical error or time error.
- Receiver manufacturers may use other accuracy measures. Root-mean-square (RMS) error is the value of one standard deviation (68%) of the error in one, two or three dimensions. Circular Error Probable (CEP) is the value of the radius of a circle, centered at the actual position, that contains 50% of the position estimates. Spherical Error Probable (SEP) is the spherical equivalent of CEP, that is, the radius of a sphere, centered at the actual position, that contains 50% of the three-dimensional position estimates. As opposed to 2drms, drms, or RMS figures, CEP and SEP are not affected by large blunder errors, making them an overly optimistic accuracy measure. (A numerical comparison of these measures appears after these notes.)
- Some receiver specification sheets list horizontal accuracy in RMS or CEP and without Selective Availability, making those receivers appear more accurate than those specified by more responsible vendors using more conservative error measures.
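To make the relationship between these accuracy measures concrete, the sketch below draws simulated horizontal errors from a circular Gaussian distribution and computes drms, 2drms, CEP, and the 95th percentile from the sample. The distribution and its 10-meter scale are assumptions for illustration only, not GPS data.

```python
import math
import random
import statistics

random.seed(1)
sigma = 10.0   # assumed standard deviation (meters) of each horizontal axis

# simulated east/north errors, one pair per position fix
errors = [(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(100000)]
radial = sorted(math.hypot(e, n) for e, n in errors)

drms = math.sqrt(statistics.fmean(r * r for r in radial))   # RMS radial error
cep  = radial[len(radial) // 2]                             # 50th percentile radius
p95  = radial[int(0.95 * len(radial))]                      # 95th percentile radius

print(f"drms  = {drms:.1f} m")      # about sigma * sqrt(2), roughly 14.1 m
print(f"2drms = {2 * drms:.1f} m")  # twice the radial RMS, the conservative figure
print(f"CEP   = {cep:.1f} m")       # about 1.18 * sigma, the optimistic measure
print(f"95%   = {p95:.1f} m")       # compare with 2drms
```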
GPS Satellite Signals
- The SVs transmit two microwave carrier signals. The L1 frequency (1575.42 MHz) carries the navigation message and the SPS code signals. The L2 frequency (1227.60 MHz) is used to measure the ionospheric delay by PPS equipped receivers.
Three binary codes shift the L1 and/or L2 carrier phase.
- The C/A Code (Coarse Acquisition) modulates the L1 carrier phase. The C/A code is a repeating 1 MHz Pseudo Random Noise (PRN) Code. This noise-like code modulates the L1 carrier signal, "spreading" the spectrum over a 1 MHz bandwidth. The C/A code repeats every 1023 bits (one millisecond). There is a different C/A code PRN for each SV. GPS satellites are often identified by their PRN number, the unique identifier for each pseudo-random-noise code. The C/A code that modulates the L1 carrier is the basis for the civil SPS. (Some timing and distance figures implied by these code rates are worked out after this list.)
- The P-Code (Precise) modulates both the L1 and L2 carrier phases. The P-Code is a very long (seven days) 10 MHz PRN code. In the Anti-Spoofing (AS) mode of operation, the P-Code is encrypted into the Y-Code. The encrypted Y-Code requires a classified AS Module for each receiver channel and is for use only by authorized users with cryptographic keys. The P (Y)-Code is the basis for the PPS.
- The Navigation Message also modulates the L1-C/A code signal. The Navigation Message is a 50 Hz signal consisting of data bits that describe the GPS satellite orbits, clock corrections, and other system parameters.
- The GPS Navigation Message consists of time-tagged data bits marking the time of transmission of each subframe at the time they are transmitted by the SV. A data bit frame consists of 1500 bits divided into five 300-bit subframes. A data frame is transmitted every thirty seconds. Three six-second subframes contain orbital and clock data. SV Clock corrections are sent in subframe one and precise SV orbital data sets (ephemeris data parameters) for the transmitting SV are sent in subframes two and three. Subframes four and five are used to transmit different pages of system data. An entire set of twenty-five frames (125 subframes) makes up the complete Navigation Message that is sent over a 12.5 minute period.
- Data frames (1500 bits) are sent every thirty seconds. Each frame consists of five subframes.
- Data bit subframes (300 bits transmitted over six seconds) contain parity bits that allow for data checking and limited error correction.
- Clock data parameters describe the SV clock and its relationship to GPS time.
- Ephemeris data parameters describe SV orbits for short sections of the satellite orbits. Normally, a receiver gathers new ephemeris data each hour, but can use old data for up to four hours without much error. The ephemeris parameters are used with an algorithm that computes the SV position for any time within the period of the orbit described by the ephemeris parameter set.
- Almanacs are approximate orbital data parameters for all SVs. The ten-parameter almanacs describe SV orbits over extended periods of time (useful for months in some cases) and a set for all SVs is sent by each SV over a period of 12.5 minutes (at least). Signal acquisition time on receiver start-up can be significantly aided by the availability of current almanacs. The approximate orbital data is used to preset the receiver with the approximate position and carrier Doppler frequency (the frequency shift caused by the rate of change in range to the moving SV) of each SV in the constellation.
- Each complete SV data set includes an ionospheric model that is used in the receiver to approximate the phase delay through the ionosphere at any location and time.
- Each SV sends the amount to which GPS Time is offset from Universal Coordinated Time. This correction can be used by the receiver to set UTC to within 100 ns.
- Other system parameters and flags are sent that characterize details of the system.
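The code and message parameters above imply some simple timing and distance figures, computed below. The chipping rates used (1.023 and 10.23 Mchip/s) are the nominal values that the text rounds to 1 MHz and 10 MHz; the rest follows directly from the numbers in this section.

```python
c = 299_792_458.0          # speed of light, m/s

ca_rate = 1.023e6          # C/A chipping rate, chips per second (nominal)
p_rate  = 10.23e6          # P-code chipping rate, chips per second (nominal)

print(c / ca_rate)         # ~293 m of signal travel per C/A chip (code-phase resolution scale)
print(c / p_rate)          # ~29.3 m per P-code chip
print(1023 / ca_rate)      # 0.001 s: the 1023-chip C/A sequence repeats each millisecond

bit_rate = 50.0            # navigation message, bits per second
print(300 / bit_rate)      # 6 s per 300-bit subframe
print(1500 / bit_rate)     # 30 s per 1500-bit frame
print(25 * 1500 / bit_rate / 60)   # 12.5 minutes for the complete 25-frame message
```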
Position and Time from GPS
Code Phase Tracking (Navigation)
- The GPS receiver produces replicas of the C/A and/or P (Y)-Code. Each PRN code is a noise-like, but pre-determined, unique series of bits.
- The receiver produces the C/A code sequence for a specific SV with some form of a C/A code generator. Modern receivers usually store a complete set of precomputed C/A code chips in memory, but a hardware, shift register, implementation can also be used.
- The C/A code generator produces a different 1023 chip sequence for each phase tap setting. In a shift register implementation the code chips are shifted in time by slewing the clock that controls the shift registers. In a memory lookup scheme the required code chips are retrieved from memory.
- The C/A code generator repeats the same 1023-chip PRN-code sequence every millisecond. PRN codes are defined for 32 satellite identification numbers.
- The receiver slides a replica of the code in time until there is correlation with the SV code.
- If the receiver applies a different PRN code to an SV signal there is no correlation.
- When the receiver uses the same code as the SV and the codes begin to line up, some signal power is detected.
- As the SV and receiver codes line up completely, the spread-spectrum carrier signal is de-spread and full signal power is detected.
- A GPS receiver uses the detected signal power in the correlated signal to align the C/A code in the receiver with the code in the SV signal. Usually a late version of the code is compared with an early version to insure that the correlation peak is tracked.
- A phase locked loop that can lock to either a positive or negative half-cycle (a bi-phase lock loop) is used to demodulate the 50 HZ navigation message from the GPS carrier signal. The same loop can be used to measure and track the carrier frequency (Doppler shift) and by keeping track of the changes to the numerically controlled oscillator, carrier frequency phase can be tracked and measured.
- The receiver PRN code start position at the time of full correlation is the time of arrival (TOA) of the SV PRN at receiver. This TOA is a measure of the range to SV offset by the amount to which the receiver clock is offset from GPS time. This TOA is called the pseudo-range.
- The position of the receiver is where the pseudo-ranges from a set of SVs intersect.
- Position is determined from multiple pseudo-range measurements at a single measurement epoch. The pseudo range measurements are used together with SV position estimates based on the precise orbital elements (the ephemeris data) sent by each SV. This orbital data allows the receiver to compute the SV positions in three dimensions at the instant that they sent their respective signals.
- Four satellites (normal navigation) can be used to determine three position dimensions and time. Position dimensions are computed by the receiver in Earth-Centered, Earth-Fixed X, Y, Z (ECEF XYZ) coordinates. (A numerical sketch of this solution appears after this list.)
- Time is used to correct the offset in the receiver clock, allowing the use of an inexpensive receiver clock.
- SV Position in XYZ is computed from four SV pseudo-ranges and the clock correction and ephemeris data.
- Receiver position is computed from the SV positions, the measured pseudo-ranges (corrected for SV clock offsets, ionospheric delays, and relativistic effects), and a receiver position estimate (usually the last computed receiver position).
- Three satellites could be used to determine three position dimensions with a perfect receiver clock. In practice this is rarely possible and three SVs are used to compute a two-dimensional, horizontal fix (in latitude and longitude) given an assumed height. This is often possible at sea or in altimeter equipped aircraft.
- Five or more satellites can provide position, time and redundancy. More SVs can provide extra position fix certainty and can allow detection of out-of-tolerance signals under certain circumstances.
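A minimal numerical sketch of the navigation solution described above: given four SV positions and pseudoranges, solve for receiver X, Y, Z and the receiver clock bias by iterating a linearized least-squares (Gauss-Newton) update. The SV coordinates, receiver position, and clock bias are made-up test values, and the sketch ignores the SV clock, ionospheric, and relativistic corrections mentioned above.

```python
import math

C = 299_792_458.0   # speed of light, m/s

# made-up SV positions (ECEF, meters), roughly at GPS orbit radius
svs = [( 20000e3,   5000e3,  18000e3),
       (-18000e3,  12000e3,  16000e3),
       (  4000e3, -22000e3,  15000e3),
       (-10000e3, -14000e3,  20000e3)]

true_rx   = (1100e3, 2200e3, 5900e3)   # made-up receiver position near Earth's surface
true_bias = 3.1e-3                     # receiver clock offset, seconds

def rho(sv, rx):
    return math.dist(sv, rx)

# pseudorange = true range + c * (receiver clock bias)
pr = [rho(sv, true_rx) + C * true_bias for sv in svs]

# iterative linearized least squares for (x, y, z, c*bias): 4 equations, 4 unknowns
x = [0.0, 0.0, 0.0, 0.0]               # start at Earth's center with zero bias
for _ in range(15):
    A, resid = [], []
    for sv, p in zip(svs, pr):
        d = rho(sv, x[:3])
        # unit line-of-sight row plus the clock column, and the residual
        A.append([(x[0] - sv[0]) / d, (x[1] - sv[1]) / d, (x[2] - sv[2]) / d, 1.0])
        resid.append(p - (d + x[3]))
    # solve the 4x4 system A * dx = resid by Gaussian elimination with pivoting
    M = [row + [ri] for row, ri in zip(A, resid)]
    n = 4
    for i in range(n):
        piv = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    dx = [0.0] * n
    for i in reversed(range(n)):
        dx[i] = (M[i][n] - sum(M[i][j] * dx[j] for j in range(i + 1, n))) / M[i][i]
    x = [xi + di for xi, di in zip(x, dx)]

print([round(v, 3) for v in x[:3]])      # close to the made-up receiver position
print(round(x[3] / C * 1e3, 6), "ms")    # recovered clock bias, milliseconds (about 3.1)
```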
Receiver Position, Velocity, and Time
- Position in XYZ is converted within the receiver to geodetic latitude, longitude and height above the ellipsoid.
- Latitude and longitude are usually provided in the geodetic datum on which GPS is based (WGS-84). Receivers can often be set to convert to other user-required datums. Position offsets of hundreds of meters can result from using the wrong datum.
- Velocity is computed from change in position over time, the SV Doppler frequencies, or both.
- Time is computed in SV Time, GPS Time, and UTC.
- SV Time is the time maintained by each satellite. Each SV contains four atomic clocks (two cesium and two rubidium). SV clocks are monitored by ground control stations and occasionally reset to maintain time to within one millisecond of GPS time. Clock correction data bits reflect the offset of each SV from GPS time.
- SV Time is set in the receiver from the GPS signals. Data bit subframes occur every six seconds and contain bits that resolve the Time of Week to within six seconds. The 50 Hz data bit stream is aligned with the C/A code transitions so that the arrival time of a data bit edge (on a 20 millisecond interval) resolves the pseudo-range to the nearest millisecond. Approximate range to the SV resolves the twenty millisecond ambiguity, and the C/A code measurement represents time to fractional milliseconds. Multiple SVs and a navigation solution (or a known position for a timing receiver) permit SV Time to be set to an accuracy limited by the position error and the pseudo-range error for each SV.
- SV Time is converted to GPS Time in the receiver.
- GPS Time is a "paper clock" ensemble of the Master Control Clock and the SV clocks. GPS Time is measured in weeks and seconds from 24:00:00, January 5, 1980 and is steered to within one microsecond of UTC. GPS Time has no leap seconds and is ahead of UTC by several seconds.
- Time in Universal Coordinated Time (UTC) is computed from GPS Time using the UTC correction parameters sent as part of the navigation data bits.
- At the transition between 23:59:59 UTC on December 31, 1998 and 00:00:00 UTC on January 1, 1999, UTC was retarded by one second. GPS Time is now ahead of UTC by 13 seconds.
Carrier Phase Tracking (Surveying)
- Carrier-phase tracking of GPS signals has resulted in a revolution in land surveying. A line of sight along the ground is no longer necessary for precise positioning. Positions can be measured up to 30 km from a reference point without intermediate points. This use of GPS requires specially equipped carrier tracking receivers.
- The L1 and/or L2 carrier signals are used in carrier phase surveying. L1 carrier cycles have a wavelength of 19 centimeters. If tracked and measured, these carrier signals can provide ranging measurements with relative accuracies of millimeters under special circumstances.
- Tracking carrier phase signals provides no time of transmission information. The carrier signals, while modulated with time tagged binary codes, carry no time-tags that distinguish one cycle from another. The measurements used in carrier phase tracking are differences in carrier phase cycles and fractions of cycles over time. At least two receivers track carrier signals at the same time. Ionospheric delay differences at the two receivers must be small enough to ensure that carrier phase cycles are properly accounted for. This usually requires that the two receivers be within about 30 km of each other.
- Carrier phase is tracked at both receivers and the changes in tracked phase are recorded over time in both receivers.
- All carrier-phase tracking is differential, requiring both a reference and remote receiver tracking carrier phases at the same time.
- Unless the reference and remote receivers use L1-L2 differences to measure the ionospheric delay, they must be close enough to ensure that the ionospheric delay difference is less than a carrier wavelength.
- Using L1-L2 ionospheric measurements and long measurement averaging periods, relative positions of fixed sites can be determined over baselines of hundreds of kilometers.
- Phase difference changes in the two receivers are processed in software to yield differences in three position dimensions between the reference station and the remote receiver. High accuracy range difference measurements with sub-centimeter accuracy are possible. Problems result from the difficulty of tracking carrier signals in noise or while the receiver moves.
- Two receivers and one SV over time result in single differences.
- Two receivers and two SVs over time provide double differences. A sketch of forming these differences follows this list.
- Post processed static carrier-phase surveying can provide 1-5 cm relative positioning within 30 km of the reference receiver with measurement time of 15 minutes for short baselines (10 km) and one hour for long baselines (30 km).
- Rapid static or fast static surveying can provide 4-10 cm accuracies with 1 kilometer baselines and 15 minutes of recording time.
- Real-Time-Kinematic (RTK) surveying techniques can provide centimeter measurements in real time over 10 km baselines tracking five or more satellites and real-time radio links between the reference and remote receivers.
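As referenced above, here is a minimal sketch (not from the original text) of forming single and double differences from carrier-phase observations at the reference and remote receivers. The phase values are synthetic, and the integer-ambiguity resolution performed by real processing software is not shown.

```python
import numpy as np

# Carrier-phase observations in cycles, indexed [receiver][satellite], at one epoch.
# Values are synthetic; real data come from the two receivers' tracking loops.
phase = {
    "reference": {"SV07": 1234567.123, "SV09": 2345678.456},
    "remote":    {"SV07": 1234001.789, "SV09": 2345102.012},
}

def single_difference(phase, sv):
    """Between-receiver difference for one SV: cancels the SV clock error."""
    return phase["remote"][sv] - phase["reference"][sv]

def double_difference(phase, sv_a, sv_b):
    """Difference of two single differences: also cancels both receiver clocks."""
    return single_difference(phase, sv_a) - single_difference(phase, sv_b)

sd = single_difference(phase, "SV07")
dd = double_difference(phase, "SV07", "SV09")
print(f"single difference {sd:.3f} cycles, double difference {dd:.3f} cycles")
```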
GPS Error Sources
- GPS errors are a combination of noise, bias, and blunders.
- Noise errors are the combined effect of PRN code noise (around 1 meter) and noise within the receiver (around 1 meter).
Bias errors result from Selective Availability and other factors
Selective Availability (SA)
- SA is the intentional degradation of the SPS signals by a time varying bias. SA is controlled by the DOD to limit accuracy for non-U. S. military and government users. The potential accuracy of the C/A code of around 30 meters is reduced to 100 meters (two standard deviations).
- The SA bias on each satellite signal is different, and so the resulting position solution is a function of the combined SA bias from each SV used in the navigation solution. Because SA is a changing bias with low frequency terms in excess of a few hours, position solutions or individual SV pseudo-ranges cannot be effectively averaged over periods shorter than a few hours. Differential corrections must be updated at intervals shorter than the correlation time of SA (and other bias errors).
Other Bias Error Sources:
- SV clock errors uncorrected by Control Segment can result in one meter errors.
- Ephemeris data errors: 1 meter
- Tropospheric delays: 1 meter. The troposphere is the lower part of the atmosphere (from ground level up to 8 to 13 km) that experiences the changes in temperature, pressure, and humidity associated with weather changes. Complex models of tropospheric delay require estimates or measurements of these parameters.
- Unmodeled ionosphere delays: 10 meters. The ionosphere is the layer of the atmosphere from 50 to 500 km that consists of ionized air. The transmitted model can only remove about half of the possible 70 ns of delay leaving a ten meter un-modeled residual.
- Multipath: 0.5 meters. Multipath is caused by reflected signals from surfaces near the receiver that can either interfere with or be mistaken for the signal that follows the straight line path from the satellite. Multipath is difficult to detect and sometimes hard to avoid.
Blunders can result in errors of hundreds of kilometers.
- Control segment mistakes due to computer or human error can cause errors from one meter to hundreds of kilometers.
- User mistakes, including incorrect geodetic datum selection, can cause errors from 1 to hundreds of meters.
- Receiver errors from software or hardware failures can cause blunder errors of any size.
- Noise and bias errors combine, resulting in typical ranging errors of around fifteen meters for each satellite used in the position solution.
Geometric Dilution of Precision (GDOP) and Visibility
GPS ranging errors are magnified by the range vector differences between the receiver and the SVs. The volume of the shape described by the unit-vectors from the receiver to the SVs used in a position fix is inversely proportional to GDOP.
- Poor GDOP, a large value representing a small unit vector-volume, results when angles from receiver to the set of SVs used are similar.
- Good GDOP, a small value representing a large unit-vector volume, results when the angles from the receiver to the SVs are different.
- GDOP is computed from the geometric relationships between the receiver position and the positions of the satellites the receiver is using for navigation. For planning purposes GDOP is often computed from Almanacs and an estimated position. Estimated GDOP does not take into account obstacles that block the line-of-sight from the position to the satellites. Estimated GDOP may not be realizable in the field.
- GDOP terms are usually computed using parameters from the navigation solution process.
- In general, ranging errors from the SV signals are multiplied by the appropriate GDOP term to estimate the resulting position or time error. Various GDOP terms can be computed from the navigation covariance matrix. ECEF XYZ DOP terms can be rotated into a North-East Down (NED) system to produce local horizontal and vertical DOP terms. A sketch of this computation follows this list.
- PDOP = Position Dilution of Precision (3-D), sometimes the Spherical DOP.
- HDOP = Horizontal Dilution of Precision (Latitude, Longitude).
- VDOP = Vertical Dilution of Precision (Height).
- TDOP = Time Dilution of Precision (Time).
- While each of these GDOP terms can be individually computed, they are formed from covariances and so are not independent of each other. A high TDOP (time dilution of precision), for example, will cause receiver clock errors which will eventually result in increased position errors.
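A minimal sketch (not from the original text) of the DOP computation described above, reusing the same hypothetical receiver/SV geometry as in the earlier navigation-solution sketch. Rotating the position block of Q into a local North-East-Down frame would yield HDOP and VDOP; only GDOP, PDOP and TDOP are shown here.

```python
import numpy as np

def dop_terms(sv_pos, rx_pos):
    """DOP terms from the receiver/SV geometry (all positions in ECEF metres)."""
    los = sv_pos - rx_pos
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([unit, np.ones((len(sv_pos), 1))])  # same geometry matrix as the fix
    Q = np.linalg.inv(G.T @ G)                        # covariance for a unit ranging error
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, tdop

# Same hypothetical geometry as the navigation-solution sketch earlier.
rx = np.array([6378137.0, 0.0, 0.0])
sv = np.array([[26560e3,      0.0,      0.0],
               [20000e3,  17500e3,      0.0],
               [20000e3,  -8750e3,  15155e3],
               [20000e3,  -8750e3, -15155e3]])
gdop, pdop, tdop = dop_terms(sv, rx)
print(f"GDOP={gdop:.2f}  PDOP={pdop:.2f}  TDOP={tdop:.2f}")
# Multiplying an expected ranging error (metres) by PDOP estimates the 3-D position error.
```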
Differential GPS (DGPS) Techniques
- The idea behind all differential positioning is to correct bias errors at one location with measured bias errors at a known position. A reference receiver, or base station, computes corrections for each satellite signal; a sketch follows this list.
- Because individual pseudo-ranges must be corrected prior to the formation of a navigation solution, DGPS implementations require software in the reference receiver that can track all SVs in view and form individual pseudo-range corrections for each SV. These corrections are passed to the remote, or rover, receiver which must be capable of applying these individual pseudo-range corrections to each SV used in the navigation solution. Applying a simple position correction from the reference receiver to the remote receiver has limited effect at useful ranges because both receivers would have to be using the same set of SVs in their navigation solutions and have identical GDOP terms (not possible at different locations) to be identically affected by bias errors.
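A minimal sketch of the per-SV pseudo-range corrections described above; the function and variable names are illustrative and not taken from any particular receiver or message format.

```python
import numpy as np

def reference_corrections(sv_positions, measured_pr, ref_position):
    """At the base station (whose position is known), form one correction per SV:
    correction = geometric range - measured pseudo-range.  The reference receiver's
    own clock offset is common to every correction and is absorbed into the
    rover's clock solution."""
    geometric = np.linalg.norm(sv_positions - ref_position, axis=1)
    return geometric - measured_pr

def apply_corrections(rover_pr, corrections):
    """At the remote (rover) receiver, apply the matching per-SV corrections
    before the navigation solution is formed."""
    return rover_pr + corrections
```

In a real system the corrections (and their rates of change) would be encoded, for example in RTCM format, and broadcast to the rover at intervals shorter than the correlation time of the bias errors.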
Differential Code GPS (Navigation)
Differential corrections may be used in real-time or later, with post-processing techniques.
- Real-time corrections can be transmitted by radio link. The U. S. Coast Guard maintains a network of differential monitors and transmits DGPS corrections over radiobeacons covering much of the U. S. coastline. DGPS corrections are often transmitted in a standard format specified by the Radio Technical Commission for Maritime Services (RTCM).
- Corrections can be recorded for post processing. Many public and private agencies record DGPS corrections for distribution by electronic means.
- Private DGPS services use leased FM sub-carrier broadcasts, satellite links, or private radio-beacons for real-time applications.
- To remove Selective Availability (and other bias errors), differential corrections should be computed at the reference station and applied at the remote receiver at intervals shorter than the correlation time of SA. Suggested DGPS update intervals are usually less than twenty seconds.
- DGPS removes common-mode errors, those errors common to both the reference and remote receivers (not multipath or receiver noise). Errors are more often common when receivers are close together (less than 100 km). Differential position accuracies of 1-10 meters are possible with DGPS based on C/A code SPS signals.
Differential Carrier GPS (Survey)
- All carrier-phase tracking is differential, requiring both a reference and remote receiver tracking carrier phases at the same time.
- In order to correctly estimate the number of carrier wavelengths at the reference and remote receivers, they must be close enough to ensure that the ionospheric delay difference is less than a carrier wavelength. This usually means that carrier-phase GPS measurements must be taken with a remote and reference station within about 30 kilometers of each other.
- Special software is required to process carrier-phase differential measurements. Newer techniques such as Real-Time-Kinematic (RTK) processing allow for centimeter relative positioning with a moving remote receiver.
Common Mode Time Transfer
- When time information is transferred from one site to another, differential techniques can result in time transfers of around 10 ns over baselines as long as 2000 km.
GPS Techniques and Project Costs
- Receiver costs vary depending on capabilities. Small civil SPS receivers can be purchased for under $200, and some can accept differential corrections. Receivers that can store files for post-processing with base station files cost more ($2,000-5,000). Receivers that can act as DGPS reference receivers (computing and providing correction data) and carrier phase tracking receivers (and two are often required) can cost many thousands of dollars ($5,000 to $40,000). Military PPS receivers may cost more or be difficult to obtain.
- Other costs include the cost of multiple receivers when needed, post-processing software, and the cost of specially trained personnel.
Project tasks can often be categorized by required accuracies which will determine equipment cost.
- Low-cost, single-receiver SPS projects (100 meter accuracy)
- Medium-cost, differential SPS code Positioning (1-10 meter accuracy)
- High-cost, single-receiver PPS projects (20 meter accuracy)
- High-cost, differential carrier phase surveys (1 mm to 1 cm accuracy)
By Peter H. Dana
Published 10/11/2011 | http://www.aprendelo.com.br/rec/global-positioning-system-overview.html | 13
78 | In physics, jerk, also known as jolt, surge, or lurch, is the rate of change of acceleration; that is, the derivative of acceleration with respect to time, the second derivative of velocity, or the third derivative of position. Jerk is defined by any of the following equivalent expressions:
j = da/dt = d^2v/dt^2 = d^3r/dt^3

where
- a is acceleration,
- v is velocity,
- r is position,
- t is time.
Jerk is a vector, and there is no generally used term to describe its scalar magnitude (e.g., "speed" as the scalar magnitude for velocity).
The SI units of jerk are metres per second cubed (metres per second per second per second, m/s3, or m·s−3). There is no universal agreement on the symbol for jerk, but j is commonly used. Newton's notation for the derivative of acceleration can also be used, especially when "surge" or "lurch" is used instead of "jerk" or "jolt".
If acceleration can be felt by a body as the force (hence pressure) exerted on it by the object bringing about the acceleration, jerk can be felt as the change in this pressure. For example, a passenger in an accelerating vehicle with zero jerk will feel a constant force from the seat on his or her body, whereas positive jerk will be felt as an increasing force on the body, and negative jerk as a decreasing force on the body.
In most force analyses, only speed and acceleration are considered. For example, the jerk produced by a body falling toward the Earth from space is of little interest, because the gravitational acceleration changes only very slowly. Sometimes, however, the analysis has to extend to jerk for a particular reason.
Jerk is often used in engineering, especially when building roller coasters. Some precision or fragile objects — such as passengers, who need time to sense stress changes and adjust their muscle tension or suffer conditions such as whiplash — can be safely subjected not only to a maximum acceleration, but also to a maximum jerk. Even where occupant safety isn't an issue, excessive jerk may result in an uncomfortable ride on elevators, trams and the like, and engineers expend considerable design effort to minimize it. Jerk may be considered when the excitation of vibrations is a concern. A device that measures jerk is called a "jerkmeter".
Jerk is also important to consider in manufacturing processes. Rapid changes in acceleration of a cutting tool can lead to premature tool wear and result in uneven cuts. This is why modern motion controllers include jerk limitation features.
In mechanical engineering, jerk is considered, in addition to velocity and acceleration, in the development of cam profiles because of tribological implications and the ability of the actuated body to follow the cam profile without chatter.
Third-order motion profile
In motion control, a common need is to move a system from one steady position to another (point-to-point motion). Following the fastest possible motion within allowed maximum values for speed, acceleration, and jerk results in a third-order motion profile, as illustrated below:
The motion profile consists of up to seven segments defined by the following:
- acceleration build-up, with maximum positive jerk
- constant maximum acceleration (zero jerk)
- acceleration ramp-down, approaching the desired maximum velocity, with maximum negative jerk
- constant maximum speed (zero jerk, zero acceleration)
- deceleration build-up, approaching the desired deceleration, with maximum negative jerk
- constant maximum deceleration (zero jerk)
- deceleration ramp-down, approaching the desired position at zero velocity, with maximum positive jerk
If the initial and final positions are sufficiently close together, the maximum acceleration or maximum velocity may never be reached.
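As a rough numerical illustration (with an arbitrary jerk limit and arbitrary segment durations, not taken from the text), the seven segments can be generated by integrating a piecewise-constant jerk schedule:

```python
import numpy as np

def integrate_jerk_profile(segments, dt=1e-4):
    """Integrate a piecewise-constant jerk schedule (Euler) to get a, v, x."""
    jerk = np.concatenate([np.full(int(round(T / dt)), j) for T, j in segments])
    accel = np.cumsum(jerk) * dt
    vel = np.cumsum(accel) * dt
    pos = np.cumsum(vel) * dt
    time = np.arange(jerk.size) * dt
    return time, jerk, accel, vel, pos

# Seven-segment, point-to-point move (assumed limits and durations):
J = 10.0                                    # jerk limit, m/s^3
t_ramp, t_hold, t_cruise = 0.2, 0.3, 0.5    # segment durations, s
segments = [
    (t_ramp, +J), (t_hold, 0.0), (t_ramp, -J),   # 1-3: build up to maximum speed
    (t_cruise, 0.0),                             # 4: constant speed
    (t_ramp, -J), (t_hold, 0.0), (t_ramp, +J),   # 5-7: decelerate back to rest
]
t, j, a, v, x = integrate_jerk_profile(segments)
print(f"peak |a| = {abs(a).max():.2f} m/s^2, peak v = {v.max():.2f} m/s, "
      f"final v = {v[-1]:.3f} m/s, distance = {x[-1]:.3f} m")
```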
A jerk system is a system whose behavior is described by a jerk equation, which is an equation of the form (Sprott 2003):

d^3x/dt^3 = f(d^2x/dt^2, dx/dt, x)
For example, certain simple electronic circuits may be designed which are described by a jerk equation. These are known as jerk circuits.
One of the most interesting properties of jerk systems is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler attractor, are conventionally described as a system of three first-order differential equations, but may be combined into a single (although rather complicated) jerk equation.
An example of a jerk equation is:

d^3x/dt^3 + A*d^2x/dt^2 + dx/dt = |x| - 1

where A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with the following jerk circuit:
In the above circuit, all resistors are of equal value, except RA = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency will be 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 will correspond to the first derivative of x and the output of 2 will correspond to the second derivative.
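A quick numerical check (not part of the original text) of the jerk equation quoted above, using SciPy; the initial state and the integration span are arbitrary choices:

```python
from scipy.integrate import solve_ivp

A = 0.6  # the adjustable parameter; 3/5 as quoted above

def jerk_system(t, state):
    # state = (x, x', x''); the jerk equation gives x'''.
    x, v, a = state
    return [v, a, -A * a - v + abs(x) - 1.0]

sol = solve_ivp(jerk_system, (0.0, 200.0), [0.0, 0.0, 0.0], max_step=0.01)
print("x ranged from", float(sol.y[0].min()), "to", float(sol.y[0].max()))
```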
Jerk can be difficult to conceptualize when it is defined in terms of calculus. When a force (push or pull) is applied to an object, that object starts to move. As long as the force is applied, the object will continue to speed up. When described in these terms, we are oversimplifying slightly. We think along the lines that there is no force on the object, then suddenly there is a force on the object. We do not think about how long it takes to apply the force.
However, in truth, the application of a force does not happen instantly. A change always happens over time. Jerk is the change in acceleration over time. Typically, the time of contact over which a force is applied is a split second.
If you push on a wall, it takes a fraction of a second before you apply the full push. Your fingertips will squoosh slightly as you begin to push. How long the squooshing takes determines the jerk. If you push on a wall very slowly, you can actually feel your push increasing. In such a case, the jerk is very low, because the change in force is happening over a relatively long time of several seconds. Jerk happens when a force is applied and removed. But the whole time a force is acting consistently on an object, there is no jerk. (This is because the acceleration is constant when there is a constant force.)
How quickly the force starts its push or pull determines the yank and subsequently the jerk. In most applications, it is not important how quickly the force is applied, and thus we typically think of forces being applied instantaneously. A familiar example of jerk is the rate of application of brakes in an automobile.
An experienced driver gradually applies the brakes, causing a slowly increasing deceleration (small jerk). An inexperienced driver, or a driver responding to an emergency, applies the brakes suddenly, causing a rapid increase in deceleration (large jerk). The sensation of jerk is noticeable, causing the passenger’s head to jerk forward.
- Y = dF/dt (yank: force per unit time)
- j = da/dt (jerk: acceleration per unit time)
- j = Y/m, following from Newton's second law divided by the (constant) mass m and the above two relations.
The higher the force or acceleration, the higher the jerk. The shorter the time of change in acceleration, such as a rollercoaster 'whipping' around a corner, the higher the jerk. For uniform jerk, the following equations can be applied:

a = a0 + j*t
v = u + a0*t + (1/2)*j*t^2
s = u*t + (1/2)*a0*t^2 + (1/6)*j*t^3

where
a : final acceleration
a0 : initial acceleration
j : jerk (rate of change of acceleration)
v : final velocity
u : initial velocity
s : distance/displacement
t : time taken
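The closed-form expressions above can be evaluated directly; the numbers below are arbitrary:

```python
def uniform_jerk_state(a0, u, j, t):
    """Closed-form kinematics for constant jerk j (symbols as listed above)."""
    a = a0 + j * t
    v = u + a0 * t + 0.5 * j * t**2
    s = u * t + 0.5 * a0 * t**2 + (1.0 / 6.0) * j * t**3
    return a, v, s

# Start from rest with zero initial acceleration and 2 m/s^3 of jerk for 3 s:
print(uniform_jerk_state(a0=0.0, u=0.0, j=2.0, t=3.0))  # -> (6.0, 9.0, 9.0)
```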
- Abraham–Lorentz force, a force in electrodynamics whose magnitude is proportional to jerk
- Shock (mechanics)
- Terminal velocity
- Wheeler-Feynman absorber theory
- Blair, G., "Making the Cam", Race Engine Technology 10, September/October 2005
- There is an idealization here that the jerk can be changed from zero to a constant non-zero value instantaneously. However, since in classical mechanics all forces are caused by smooth fields, all derivatives of the position are continuous. On the other hand, this is also an idealization; in quantum field theory particles do change momentum discontinuously.
- Sprott JC (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 0-19-850839-5.
- Sprott JC (1997). "Some simple chaotic jerk functions" (PDF). Am J Phys 65 (6): 537–43. Bibcode:1997AmJPh..65..537S. doi:10.1119/1.18585. Retrieved 2009-09-28.
- Blair G (2005). "Making the Cam" (PDF). Race Engine Technology (010). Retrieved 2009-09-29.
- What is the term used for the third derivative of position?, description of jerk in the Usenet Physics FAQ.
- Mathematics of Motion Control Profiles | http://en.wikipedia.org/wiki/Jerk_(physics) | 13 |
82 | An important mathematical concept is the idea of a function. A function is a mathematical process that uniquely relates the value of one variable to the value of another variable in the problem. Schematically, we can think of a function as a "processor" that takes in one (or more) input variables and produces an output variable. We call the output variable the dependent variable and the input variable the independent variable. Changing the value of an independent variable produces a change in the value of the dependent variable that is always the same. On this slide we will denote an independent variable as X and the dependent variable as Y. We denote that Y is a function of X by the symbol:

Y = f(X)
Some functions occur so often in math and science that we assign special names and symbols to them. At the bottom of the slide we have listed some examples of functions that occur in aerospace engineering.

The trigonometric functions sine, cosine, and tangent relate the various sides and angles of a triangle. The value of each function depends only on the angle between two sides of a right triangle. So we can write the functions

Y = sin(X)
W = cos(X)
Z = tan(X)

where the value of X is an angle and the values of Y, W, and Z are numbers.
Since the value of the function is always the same, the value can be tabulated and used to solve problems.
Some examples of problems involving triangles and angles include the forces on a model rocket during flight and the resolution of the components of a vector.
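As a small illustration of the trigonometric functions above (all numbers arbitrary), a force can be resolved into horizontal and vertical components with cos and sin:

```python
import math

# Resolve a 10 N force applied 30 degrees above the horizontal into components.
force = 10.0                 # newtons (arbitrary)
angle = math.radians(30.0)   # the math module expects the angle in radians
horizontal = force * math.cos(angle)
vertical = force * math.sin(angle)
print(f"Fx = {horizontal:.2f} N, Fy = {vertical:.2f} N, tan = {math.tan(angle):.3f}")
```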
Another function which you may have seen is the factorial function with the symbol X!. The value of this function is formed by multiplying X times X-1 times X-2, etc., until you get to 1:

Y = X!
Y = 4! = 4 * 3 * 2 * 1 = 24

This function occurs in many probability and statistics problems.
There are some interesting properties of functions. Functions can be grouped together to form other functions. On the slide the polynomial:

Y = X^3 + X^2 + 5*X + 12

is made by adding powers of X. The function here is:

Y = X^n

where n is any number. Y is generated by multiplying X times itself n times.
There are some special functions, called inverse functions, which "un-do" the operation of some other function. The square root function is the inverse of the square function. So if

Y = X^2
16 = 4^2

then

Y = sqrt(X)
4 = sqrt(16)
Because the aerodynamic forces depend on the square of the velocity, we often use square roots to solve velocity problems. This function is used in the determination of the terminal velocity of a falling object.
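A brief sketch of the square-root relationship just mentioned: at terminal velocity the drag, which varies with the square of the velocity, balances the weight, so V = sqrt(2*W / (Cd*rho*A)). The numbers below are assumed purely for illustration:

```python
import math

def terminal_velocity(weight_n, cd, rho, area_m2):
    """V such that drag 0.5*rho*V^2*Cd*A equals the weight W: V = sqrt(2W/(Cd*rho*A))."""
    return math.sqrt(2.0 * weight_n / (cd * rho * area_m2))

# Illustrative numbers for a small rocket descending on a parachute (all assumed):
print(f"{terminal_velocity(weight_n=1.5, cd=1.3, rho=1.225, area_m2=0.2):.2f} m/s")
```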
The function tan^-1(X) on the figure is called the arc-tangent and is the inverse of the trigonometric tangent function. It returns the angle Y whose tangent is X. There are inverses for the sine and cosine as well.
The exponential function, exp(X) or e^X, is a special function that comes from calculus. In calculus, we are often trying to determine the rate at which some function changes. The rate is expressed as another function called a differential, and the rate is the slope of the graph of the function. If we have a function Y = f(X), then the slope of the function is called Z and:

Z = dY/dX

The exp function is the special function whose slope (rate of change) is equal to the value of the function:

Y = exp(X) = d[exp(X)]/dX = dY/dX

This function often occurs in nature when the rate of change of a variable equals the amount of the variable. The change in atmospheric pressure with altitude is an exponential.
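A quick numerical check, using a central difference, that the slope of exp(X) equals exp(X) itself:

```python
import math

def slope(f, x, h=1e-6):
    """Central-difference estimate of the slope dY/dX."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (0.0, 1.0, 2.5):
    print(x, math.exp(x), slope(math.exp, x))  # the last two columns agree closely
```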
The inverse of the exponential function is the logarithmic function with the symbol ln(X). The ln function appears in many thermodynamics problems, such as calculating the change in the entropy of a gas during a thermodynamic process.
The cosh(X) function on the slide is the hyperbolic cosine function. It is a special, tabulated function that results from solving a certain form of differential equation and is similar to the trigonometric cos function. This function appears in the solution to the problem of a line that sags under its own weight, like the support cables of a suspension bridge.
Beginner's Guide Home | http://www.grc.nasa.gov/WWW/K-12/rocket/function.html | 13 |
139 | Fire is a manifestation of uncontrolled combustion. It involves combustible materials which are found around us in the buildings in which we live, work and play, as well as a wide range of gases, liquids and solids which are encountered in industry and commerce. They are commonly carbon-based, and may be referred to collectively as fuels in the context of this discussion. Despite the wide variety of these fuels in both their chemical and physical states, in fire they share features that are common to them all. Differences are encountered in the ease with which fire can be initiated (ignition), the rate with which fire can develop (flame spread), and the power that can be generated (rate of heat release), but as our understanding of the science of fire improves, we become better able to quantify and predict fire behaviour and apply our knowledge to fire safety in general. The purpose of this section is to review some of the underlying principles and provide guidance to an understanding of fire processes.
Combustible materials are all around us. Given the appropriate circumstances, they can be made to burn by subjecting them to an ignition source which is capable of initiating a self-sustaining reaction. In this process, the “fuel” reacts with oxygen from the air to release energy (heat), while being converted to products of combustion, some of which may be harmful. The mechanisms of ignition and burning need to be clearly understood.
Most everyday fires involve solid materials (e.g., wood, wood products and synthetic polymers), although gaseous and liquid fuels are not uncommon. A brief review of the combustion of gases and liquids is desirable before some of the basic concepts are discussed.
A flammable gas (e.g., propane, C3H8) can be burned in two ways: a stream or jet of gas from a pipe (cf. the simple Bunsen burner with the air inlet closed) can be ignited and will burn as a diffusion flame in which burning occurs in those regions where gaseous fuel and air mix by diffusive processes. Such a flame has a characteristic yellow luminosity, indicating the presence of minute soot particles formed as a result of incomplete combustion. Some of these will burn in the flame, but others will emerge from the flame tip to form smoke.
If the gas and air are intimately mixed before ignition, then premixed combustion will occur, provided that the gas/air mixture lies within a range of concentrations bounded by the lower and upper flammability limits (see table 41.1). Outside these limits, the mixture is non-flammable. (Note that a premixed flame is stabilized at the mouth of a Bunsen burner when the air inlet is open.) If a mixture is flammable, then it can be ignited by a small ignition source, such as an electrical spark. The stoichiometric mixture is the most readily ignited, in which the amount of oxygen present is in the correct proportion to burn all the fuel to carbon dioxide and water (see accompanying equation, below, in which nitrogen can be seen to be present in the same proportion as in air but does not take part in the reaction). Propane (C3H8) is the combustible material in this reaction:
C3H8 + 5O2 + 18.8N2 = 3CO2 + 4H2O + 18.8N2
Table 41.1. Lower and upper flammability limits (% by volume) of typical flammable gases in air.
An electrical discharge as small as 0.3 mJ is sufficient to ignite a stoichiometric propane/air mixture in the reaction illustrated. This represents a barely perceptible static spark, as experienced by someone who has walked across a synthetic carpet and touched a grounded object. Even smaller amounts of energy are required for certain reactive gases such as hydrogen, ethylene and ethyne. In pure oxygen (as in the reaction above, but with no nitrogen present as a diluent), even lower energies are sufficient.
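The stoichiometric concentration follows directly from the mole ratios in the reaction above; the short check below uses commonly quoted approximate flammability limits for propane to confirm that it lies inside the flammable range:

```python
# Stoichiometric propane fraction in air, from C3H8 + 5 O2 + 18.8 N2 -> products:
fuel, oxygen, nitrogen = 1.0, 5.0, 18.8        # moles per mole of fuel
stoich = fuel / (fuel + oxygen + nitrogen)     # mole fraction = volume fraction
print(f"stoichiometric propane concentration ~ {100 * stoich:.1f}% by volume")

# Commonly quoted approximate flammability limits for propane in air (% by volume):
lower, upper = 2.1, 9.5
assert lower < 100 * stoich < upper            # ~4.0%, inside the flammable range
```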
The diffusion flame associated with a flow of gaseous fuel exemplifies the mode of burning that is observed when a liquid or solid fuel is undergoing flaming combustion. However, in this case, the flame is fed by fuel vapours generated at the surface of the condensed phase. The rate of supply of these vapours is coupled to their rate of burning in the diffusion flame. Energy is transferred from the flame to the surface, thus providing the energy necessary to produce the vapours. This is a simple evaporative process for liquid fuels, but for solids, enough energy must be provided to cause chemical decomposition of the fuel, breaking large polymeric molecules into smaller fragments which can vaporize and escape from the surface. This thermal feedback is essential to maintain the flow of vapours, and hence support the diffusion flame (figure 41.1). Flames can be extinguished by interfering with this process in a number of ways (see below).
An understanding of heat (or energy) transfer is the key to an understanding of fire behaviour and fire processes. The subject deserves careful study. There are many excellent texts to which one may turn (Welty, Wilson and Wicks 1976; DiNenno 1988), but for the present purposes it is necessary only to draw attention to the three mechanisms: conduction, convection and radiation. The basic equations for steady-state heat transfer (expressed as a heat flux q″, in kW/m2) are:

q″ = k(T1 - T2)/l (conduction)
q″ = h(T1 - T2) (convection)
q″ = εσT^4 (radiation)

Conduction is relevant to heat transfer through solids; k is a material property known as the thermal conductivity (kW/mK) and l is the distance (m) over which the temperature falls from T1 to T2 (in degrees Kelvin). Convection in this context refers to the transfer of heat from a fluid (in this case, air, flames or fire products) to a surface (solid or liquid); h is the convective heat transfer coefficient (kW/m2K) and depends on the configuration of the surface and the nature of the flow of fluid past that surface. Radiation is similar to visible light (but with a longer wavelength) and requires no intervening medium (it can traverse a vacuum); ε is the emissivity (the efficiency with which a surface can radiate), σ is the Stefan-Boltzmann constant (56.7 x 10-12 kW/m2K4). Thermal radiation travels at the speed of light (3 x 108 m/s) and an intervening solid object will cast a shadow.
Heat transfer from flames to the surface of condensed fuels (liquids and solids) involves a mixture of convection and radiation, although the latter dominates when the effective diameter of the fire exceeds 1 m. The rate of burning (m, in g/s) can be expressed by the formula:

m = (q″F - q″L) Afuel / Lv

where q″F is the heat flux from the flame to the surface (kW/m2); q″L is the heat loss from the surface (e.g., by radiation, and by conduction through the solid), expressed as a flux (kW/m2); Afuel is the surface area of the fuel (m2); and Lv is the heat of gasification (equivalent to the latent heat of evaporation for a liquid) (kJ/g). If a fire develops in a confined space, the hot smoky gases rising from the fire (driven by buoyancy) are deflected beneath the ceiling, heating the upper surfaces. The resulting smoke layer and the hot surfaces radiate down to the lower part of the enclosure, in particular to the fuel surface, thus increasing the rate of burning:

m = (q″F + q″E - q″L) Afuel / Lv

where q″E is the extra heat supplied by radiation from the upper part of the enclosure (kW/m2). This additional feedback leads to greatly enhanced rates of burning and to the phenomenon of flashover in enclosed spaces where there is an adequate supply of air and sufficient fuel to sustain the fire (Drysdale 1985).
The rate of burning is moderated by the magnitude of the value of Lv, the heat of gasification. This tends to be low for liquids and relatively high for solids. Consequently, solids tend to burn much more slowly than liquids.
It has been argued that the most important single parameter which determines the fire behaviour of a material (or assembly of materials) is the rate of heat release (RHR), which is coupled to the rate of burning through the equation:

RHR = m ΔHc

where ΔHc is the effective heat of combustion of the fuel (kJ/g). New techniques are now available for measuring the RHR at different heat fluxes (e.g., the Cone Calorimeter), and it is now possible to measure the RHR of large items, such as upholstered furniture and wall linings, in large-scale calorimeters which use oxygen consumption measurements to determine the rate of heat release (Babrauskas and Grayson 1992).
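A rough worked example of the burning-rate and RHR relations above; every value below is assumed purely for illustration:

```python
# Assumed numbers for a 0.5 m^2 liquid pool fire (illustrative only).
q_flame = 25.0      # heat flux from flame to surface, kW/m^2
q_loss = 3.0        # surface re-radiation and conduction losses, kW/m^2
q_external = 0.0    # extra radiation from a hot smoke layer, kW/m^2 (0 in the open)
area = 0.5          # fuel surface area, m^2
L_v = 1.0           # heat of gasification, kJ/g
dH_c = 40.0         # effective heat of combustion, kJ/g

m_dot = (q_flame + q_external - q_loss) * area / L_v    # burning rate, g/s
print(f"burning rate ~ {m_dot:.0f} g/s, RHR ~ {m_dot * dH_c:.0f} kW")

# With a hot smoke layer adding 20 kW/m^2 to the fuel surface, both values jump:
q_external = 20.0
m_dot = (q_flame + q_external - q_loss) * area / L_v
print(f"enhanced burning rate ~ {m_dot:.0f} g/s, RHR ~ {m_dot * dH_c:.0f} kW")
```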
It should be noted that as a fire grows in size, not only does the rate of heat release increase, but the rate of production of “fire products” also increases. These contain toxic and noxious species as well as particulate smoke, the yields of which will increase when a fire developing in a building enclosure becomes underventilated.
Ignition of a liquid or solid involves raising the surface temperature until vapours are being evolved at a rate sufficient to support a flame after the vapours have been ignited. Liquid fuels can be classified according to their flashpoints, the lowest temperature at which there is a flammable vapour/air mixture at the surface (i.e., the vapour pressure corresponds to the lower flammability limit). These can be measured using a standard apparatus, and typical examples are given in table 41.2. A slightly higher temperature is required to produce a sufficient flow of vapours to support a diffusion flame. This is known as the firepoint. For combustible solids, the same concepts are valid, but higher temperatures are required as chemical decomposition is involved. The firepoint is typically in excess of 300 °C, depending on the fuel. In general, flame-retarded materials have significantly higher firepoints (see table 41.2).
Table 41.2. Closed cup flashpoints1 and firepoints2 (°C) of liquid and solid fuels, including gasoline (100 octane) (l) and flame-retarded (FR) polymethylmethacrylate, polypropylene and polystyrene (s). l = liquid; s = solid. 1 By Pensky-Martens closed cup apparatus. 2 Liquids: by Cleveland open cup apparatus; solids: Drysdale and Thomson (1994). (Note that the results for the flame-retarded species refer to a heat flux of 37 kW/m2.)
Ease of ignition of a solid material is therefore dependent on the ease with which its surface temperature can be raised to the firepoint, e.g., by exposure to radiant heat or to a flow of hot gases. This is less dependent on the chemistry of the decomposition process than on the thickness and physical properties of the solid, namely, its thermal conductivity (k), density (ρ) and heat capacity (c). Thin solids, such as wood shavings (and all thin sections), can be ignited very easily because they have a low thermal mass, that is, relatively little heat is required to raise the temperature to the firepoint. However, when heat is transferred to the surface of a thick solid, some will be conducted from the surface into the body of the solid, thus moderating the temperature rise of the surface. It can be shown theoretically that the rate of rise of the surface temperature is determined by the thermal inertia of the material, that is, the product kρc. This is borne out in practice, since thick materials with a high thermal inertia (e.g., oak, solid polyurethane) will take a long time to ignite under a given heat flux, whereas under identical conditions thick materials with a low thermal inertia (e.g., fibre insulating board, polyurethane foam) will ignite quickly (Drysdale 1985).
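The effect of thermal inertia can be illustrated with the standard constant-flux result for the surface of a thick (semi-infinite) solid, Ts = T0 + 2q″(t/(πkρc))^1/2, which is a textbook approximation rather than a formula quoted in the text above; the property values and the 300 °C firepoint used below are rough assumptions, and surface heat losses are ignored.

```python
import math

def time_to_firepoint(q_kw_m2, k, rho, c, T0=20.0, T_fire=300.0):
    """Time for the surface of a thick solid to reach the firepoint under a constant
    net heat flux, from T_fire = T0 + 2*q*sqrt(t / (pi*k*rho*c))."""
    q = q_kw_m2 * 1000.0            # W/m^2
    dT = T_fire - T0
    return math.pi * k * rho * c * (dT / (2.0 * q)) ** 2

# Rough property values (k in W/mK, rho in kg/m^3, c in J/kgK):
materials = {
    "oak (high thermal inertia)": dict(k=0.17, rho=700.0, c=2400.0),
    "fibre insulating board (low thermal inertia)": dict(k=0.05, rho=250.0, c=1400.0),
}
for name, props in materials.items():
    print(f"{name}: ~{time_to_firepoint(25.0, **props):.0f} s to firepoint at 25 kW/m^2")
```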
Ignition is illustrated schematically in figure 41.2 (piloted ignition). For successful ignition, an ignition source must be capable not only of raising the surface temperature to the firepoint, or above, but it must also cause the vapours to ignite. An impinging flame will act in both capacities, but an imposed radiative flux from a remote source may lead to the evolution of vapours at a temperature above the firepoint, without the vapours igniting. However, if the evolved vapours are hot enough (which requires the surface temperature to be much higher than the firepoint), they may ignite spontaneously as they mix with air. This process is known as spontaneous ignition.
A large number of ignition sources can be identified, but they have one thing in common, which is that they are the result of some form of carelessness or inaction. A typical list would include naked flames, “smokers’ materials”, frictional heating, electrical devices (heaters, irons, cookers, etc.) and so on. An excellent survey may be found in Cote (1991). Some of these are summarized in table 41.3 .
Table 41.3. Sources of ignition
Electrically powered equipment: electric heaters, hair dryers, electric blankets, etc.
Open flame sources: match, cigarette lighter, blow torch, etc.
Fuelled equipment: gas fire, space heater, cooker, etc.
Other fuelled equipment: wood stove, etc.
Lighted tobacco products: cigar, pipe, etc.
Hot objects: hot pipes, mechanical sparks, etc.
Exposure to heating: adjacent fire, etc.
Spontaneous heating: linseed oil-soaked rags, coal piles, etc.
Chemical reaction: rare, e.g., potassium permanganate with glycerol
It should be noted that smouldering cigarettes cannot initiate flaming combustion directly (even in common gaseous fuels), but can cause smouldering in materials which have the propensity to undergo this type of combustion. This is observed only with materials which char on heating. Smouldering involves the surface oxidation of the char, which generates enough heat locally to produce fresh char from adjacent unburnt fuel. It is a very slow process, but may eventually undergo a transition to flaming. Thereafter, the fire will develop very rapidly.
Materials which have the propensity to smoulder can also exhibit the phenomenon of self-heating (Bowes 1984). This arises when such a material is stored in large quantities and in such a way that heat generated by slow surface oxidation cannot escape, leading to a rise in temperature within the mass. If the conditions are right, this can lead to a runaway process ultimately developing into a smouldering reaction at depth within the material.
A major component in the growth of any fire is the rate at which flame will spread over adjacent combustible surfaces. Flame spread can be modelled as an advancing ignition front in which the leading edge of the flame acts as an ignition source for the fuel that is not yet burning. The rate of spread is determined partly by the same material properties that control the ease of ignition and partly by the interaction between the existing flame and the surface ahead of the front. Upward, vertical spread is the most rapid as buoyancy ensures that the flames flow upwards, exposing the surface above the burning area to direct heat transfer from the flames. This should be contrasted with spread over a horizontal surface when the flames from the burning area rise vertically, away from the surface. Indeed, it is common experience that vertical spread is the most hazardous (e.g., flame spread on curtains and drapes and on loose clothing such as dresses and nightgowns).
The rate of spread is also affected by an imposed radiant heat flux. In the development of a fire in a room, the area of the fire will grow more rapidly under the increasing level of radiation that builds up as the fire progresses. This will contribute to the acceleration of fire growth that is characteristic of flashover.
Fire extinction and suppression can be examined in terms of the above outline of the theory of fire. The gas phase combustion processes (i.e., the flame reactions) are very sensitive to chemical inhibitors. Some of the flame retardants used to improve the “fire properties” of materials rely on the fact that small amounts of inhibitor released with the fuel vapours will suppress the establishment of flame. The presence of a flame retardant cannot render a combustible material non-combustible, but it can make ignition more difficult, perhaps preventing ignition altogether, provided that the ignition source is small. However, if a flame-retarded material becomes involved in an existing fire, it will burn as the high heat fluxes overwhelm the effect of the retardant.
Extinction of a fire may be achieved in a number of ways:
1. stopping the supply of fuel vapours
2. quenching the flame by chemical extinguishers (inhibiting)
3. removing the supply of air (oxygen) to the fire (smothering)
The first method, stopping the supply of fuel vapours, is clearly applicable to a gas-jet fire in which the supply of the fuel can simply be turned off. However, it is also the most common and safest method of extinguishing a fire involving condensed fuels. In the case of a fire involving a solid, this requires the fuel surface to be cooled below the firepoint, when the flow of vapours becomes too small to support a flame. This is achieved most effectively by the application of water, either manually or by means of an automatic system (sprinklers, water spray, etc.). In general, liquid fires cannot be dealt with in this manner: liquid fuels with low firepoints simply cannot be cooled sufficiently, while in the case of a high-firepoint fuel, vigorous vaporization of water when it comes into contact with the hot liquid at the surface can lead to burning fuel being ejected from the container. This can have very serious consequences for those fighting the fire. (There are some special cases in which an automatic high-pressure water-spray system may be designed to deal with the latter type of fire, but this is not common.)
Liquid fires are commonly extinguished by the use of fire-fighting foams (Cote 1991). This is produced by aspirating a foam concentrate into a stream of water which is then directed at the fire through a special nozzle which permits air to be entrained into the flow. This produces a foam which floats on top of the liquid, reducing the rate of supply of fuel vapours by a blockage effect and by shielding the surface from heat transfer from the flames. The foam has to be applied carefully to form a “raft” which gradually increases in size to cover the liquid surface. The flames will decrease in size as the raft grows, and at the same time the foam will gradually break down, releasing water which will aid the cooling of the surface. The mechanism is in fact complex, although the net result is to control the flow of vapours.
There are a number of foam concentrates available, and it is important to choose one that is compatible with the liquids that are to be protected. The original “protein foams” were developed for hydrocarbon liquid fires, but break down rapidly if brought into contact with liquid fuels that are water soluble. A range of “synthetic foams” have been developed to tackle the entire range of liquid fires that may be encountered. One of these, aqueous film-forming foam (AFFF), is an all-purpose foam which also produces a film of water on the surface of the liquid fuel, thus increasing its effectiveness.
This method makes use of chemical suppressants to extinguish the flame. The reactions which occur in the flame involve free radicals, a highly reactive species which have only a fleeting existence but are continuously regenerated by a branched chain process that maintains high enough concentrations to allow the overall reaction (e.g., an R1 type reaction) to proceed at a fast rate. Chemical suppressants applied in sufficient quantity will cause a dramatic fall in the concentration of these radicals, effectively quenching the flame. The most common agents that operate in this way are the halons and dry powders.
Halons react in the flame to generate other intermediate species with which the flame radicals react preferentially. Relatively small amounts of the halons are required to extinguish a fire, and for this reason they were traditionally considered highly desirable; extinguishing concentrations are “breathable” (although the products generated while passing through the flame are noxious). Dry powders act in a similar fashion, but under certain circumstances are much more effective. Fine particles are dispersed into the flame and cause termination of the radical chains. It is important that the particles are small and numerous. This is achieved by the manufacturers of many proprietary brands of dry powders by selecting a powder that “decrepitates”, that is, the particles fragment into smaller particles when they are exposed to the high temperatures of the flame.
For a person whose clothing has caught fire, a dry powder extinguisher is recognized as the best method to control flames and to protect that individual. Rapid intervention gives rapid “knockdown”, thus minimizing injury. However, the flame must be completely extinguished because the particles quickly fall to the ground and any residual flaming will quickly regain hold. Similarly, halons will only remain effective if the local concentrations are maintained. If it is applied out of doors, the halon vapour rapidly disperses, and once again the fire will rapidly re-establish itself if there is any residual flame. More significantly, the loss of the suppressant will be followed by re-ignition of the fuel if the surface temperatures are high enough. Neither halons nor dry powders have any significant cooling effect on the fuel surface.
The following description is an oversimplification of the process. While “removing the supply of air” will certainly cause the fire to extinguish, to do this it is only necessary to reduce the oxygen concentration below a critical level. The well-known “oxygen index test” classifies combustible materials according to the minimum oxygen concentration in an oxygen/nitrogen mixture that will just support flaming. Many common materials will burn at oxygen concentrations down to approximately 14% at ambient temperatures (ca. 20 °C) and in the absence of any imposed heat transfer. The critical concentration is temperature dependent, decreasing as the temperature is increased. Thus, a fire that has been burning for some time will be capable of supporting flames at concentrations perhaps as low as 7%. A fire in a room may be held in check and may even self-extinguish if the supply of oxygen is limited by keeping doors and windows closed. Flaming may cease, but smouldering will continue at very much lower oxygen concentrations. Admission of air by opening a door or breaking a window before the room has cooled sufficiently can lead to a vigorous eruption of the fire, known as backdraught, or backdraft.
“Removal of air” is difficult to achieve. However, an atmosphere may be rendered “inert” by total flooding by means of a gas which will not support combustion, such as nitrogen, carbon dioxide or gases from a combustion process (e.g., a ship’s engines) which are low in oxygen and high in carbon dioxide. This technique can only be used in enclosed spaces as it is necessary to maintain the required concentration of the “inert gas” until either the fire has extinguished completely or fire-fighting operations can begin. Total flooding has special applications, such as for ships’ holds and rare book collections in libraries. The required minimum concentrations of the inert gases are shown in table 41.4 . These are based on the assumption that the fire is detected at an early stage and that the flooding is carried out before too much heat has accumulated in the space.
Table 41.4. Required minimum concentration (% volume) of inert gas for total flooding.
“Removal of air” can be effected in the immediate vicinity of a small fire by local application of a suppressant from an extinguisher. Carbon dioxide is the only gas that is used in this way. However, as this gas quickly disperses, it is essential to extinguish all flaming during the attack on the fire; otherwise, flaming will re-establish itself. Re-ignition is also possible because carbon dioxide has little if any cooling effect. It is worth noting that a fine water spray entrained into a flame can cause extinction as the combined result of evaporation of the droplets (which cools the burning zone) and reduction of the oxygen concentration by dilution by water vapour (which acts in the same way as carbon dioxide). Fine water sprays and mists are being considered as possible replacements for halons.
It is appropriate to mention here that it is inadvisable to extinguish a gas flame unless the gas flow can be stopped immediately thereafter. Otherwise, a substantial volume of flammable gas may build up and subsequently ignite, with potentially serious consequences.
This method is included here for completeness. A match flame can easily be blown out by increasing the air velocity above a critical value in the vicinity of the flame. The mechanism operates by destabilizing the flame in the vicinity of the fuel. In principle, larger fires can be controlled in the same way, but explosive charges are normally required to generate sufficient velocities. Oil well fires can be extinguished in this manner.
Finally, a common feature that needs to be emphasized is that the ease with which a fire can be extinguished decreases rapidly as the fire increases in size. Early detection permits extinction with minimal quantities of suppressant, with reduced losses. In choosing a suppressant system, one should take into account the potential rate of fire development and what type of detection system is available.
An explosion is characterized by the sudden release of energy, producing a shock wave, or blast wave, that may be capable of causing remote damage. There are two distinct types of sources, namely, the high explosive and the pressure burst. The high explosive is typified by compounds such as trinitrotoluene (TNT) and cyclotrimethylenetrinitramine (RDX). These compounds are highly exothermic species, decomposing to release substantial quantities of energy. Although thermally stable (although some are less so and require desensitization to make them safe to handle), they can be induced to detonate, with decomposition, propagating at the velocity of sound through the solid. If the amount of energy released is high enough, a blast wave will propagate from the source with the potential to do significant damage at a distance.
By assessing remote damage, one can estimate the size of the explosion in terms of “TNT equivalent” (normally in metric tons). This technique relies on the large amount of data that has been gathered on the damage potential of TNT (much of it during wartime), and uses empirical scaling laws which have been developed from studies of the damage caused by known quantities of TNT.
In peacetime, high explosives are used in a variety of activities, including mining, quarrying and major civil engineering works. Their presence on a site represents a particular hazard that requires specific management. However, the other source of “explosions” can be equally devastating, particularly if the hazard has not been recognized. Overpressures leading to pressure bursts can be the result of chemical processes within plants or from purely physical effects, as will occur if a vessel is heated externally, leading to overpressurization. The term BLEVE (boiling liquid expanding vapour explosion) has its origins here, referring originally to the failure of steam boilers. It is now also commonly used to describe the event in which a pressure vessel containing a liquefied gas such as LPG (liquified petroleum gas) fails in a fire, releasing the flammable contents, which then ignite to produce a “fireball”.
On the other hand, the overpressure may be caused internally by a chemical process. In the process industries, self-heating can lead to a runaway reaction, generating high temperatures and pressures capable of causing a pressure burst. However, the most common type of explosion is caused by the ignition of a flammable gas/air mixture which is confined within an item of a plant or indeed within any confining structure or enclosure. The prerequisite is the formation of a flammable mixture, an occurrence which should be avoided by good design and management. In the event of an accidental release, a flammable atmosphere will exist wherever the concentration of the gas (or vapour) lies between the lower and upper flammability limits (table 41.1). If an ignition source is introduced to one of these regions, a premixed flame will propagate rapidly from the source, converting the fuel/air mixture into combustion products at an elevated temperature. This can be as high as 2,100 K, indicating that in a completely closed system initially at 300 K, an overpressure as high as 7 bars is possible. Only specially designed pressure vessels are capable of containing such overpressures. Ordinary buildings will fail unless protected by pressure relief panels or bursting discs or by an explosion suppression system. Should a flammable mixture form within a building, the subsequent explosion can cause significant structural damage, perhaps total destruction, unless the explosion can vent to the outside through openings (e.g., the failure of windows) created during the early stages of the explosion.
Explosions of this type are also associated with the ignition of dust suspensions in air (Palmer 1973). These are encountered when there is a substantial accumulation of “explosible” dust which is dislodged from shelves, rafters and ledges within a building to form a cloud, which is then exposed to an ignition source (e.g., in flour mills, grain elevators, etc.). The dust must (obviously) be combustible, but not all combustible dusts are explosible at ambient temperatures. Standard tests have been designed to determine whether a dust is explosible. These can also be used to illustrate that explosible dusts exhibit “explosibility limits”, similar in concept to the “flammability limits” of gases and vapours. In general, a dust explosion has the potential to do a great deal of damage because the initial event may cause more dust to be dislodged, forming an even greater dust cloud which will inevitably ignite, to produce an even greater explosion.
Explosion venting, or explosion relief, will only operate successfully if the rate of development of the explosion is relatively slow, such as associated with the propagation of a premixed flame through a stationary flammable mixture or an explosible dust cloud. Explosion venting is of no use if detonation is involved. The reason for this is that the pressure relief openings have to be created at an early stage of the event when the pressure is still relatively low. If a detonation occurs, the pressure rises too rapidly for relief to be effective, and the enclosing vessel or item of a plant experiences very high internal pressures which will lead to massive destruction. Detonation of a flammable gas mixture can occur if the mixture is contained within a long pipe or duct. Under certain conditions, propagation of the premixed flame will push the unburnt gas ahead of the flame front at a rate that will increase turbulence, which in turn will increase the rate of propagation. This provides a feedback loop which will cause the flame to accelerate until a shock wave is formed. This, combined with the combustion process, is a detonation wave which can propagate at velocities well in excess of 1,000 m/s. This may be compared with the fundamental burning velocity of a stoichiometric propane/air mixture of 0.45 m/s. (This is the rate at which a flame will propagate through a quiescent (i.e., non-turbulent) propane/air mixture.)
The importance of turbulence in the development of this type of explosion cannot be overstated. The successful operation of an explosion protection system relies on early venting or early suppression. If the rate of development of the explosion is too fast, then the protection system will not be effective, and unacceptable overpressures can be produced.
An alternative to explosion relief is explosion suppression. This type of protection requires that the explosion is detected at a very early stage, as close to ignition as possible. The detector is used to initiate the rapid release of a suppressant into the path of the propagating flame, effectively arresting the explosion before the pressure has increased to an extent at which the integrity of the enclosing boundaries is threatened. The halons have been commonly used for this purpose, but as these are being phased out, attention is now being paid to the use of high-pressure water-spray systems. This type of protection is very expensive and has limited application as it can only be used in relatively small volumes within which the suppressant can be distributed quickly and uniformly (e.g., ducts carrying flammable vapour or explosible dusts).
In general terms, fire science has only recently been developed to a stage at which it is capable of providing the knowledge base on which rational decisions regarding engineering design, including safety issues, can be based. Traditionally, fire safety has developed on an ad hoc basis, effectively responding to incidents by imposing regulations or other restrictions to ensure that there will be no re-occurrence. Many examples could be quoted. For example, the Great Fire of London in 1666 led in due course to the establishment of the first building regulations (or codes) and the development of fire insurance. More recent incidents, such as the high-rise office block fires in São Paulo, Brazil, in 1972 and 1974, initiated changes to the building codes, framed in such a way as to prevent similar multiple-fatality fires in the future. Other problems have been addressed in a similar fashion. In California in the United States, the hazard associated with certain types of modern upholstered furniture (particularly those containing standard polyurethane foam) was recognized, and eventually strict regulations were introduced to control its availability.
These are simple cases in which observations of the consequences of fire have led to the imposition of a set of rules intended to improve the safety of the individual and the community in the event of fire. The decision for action on any issue has to be justified on the basis of an analysis of our knowledge of fire incidents. It is necessary to show that the problem is real. In some cases (such as the São Paulo fires) this exercise is academic, but in others, such as “proving” that modern furnishings are a problem, it is necessary to ensure that the associated expenditure is justified. This requires a reliable database on fire incidents which over a number of years is capable of showing trends in the number of fires, the number of fatalities, the incidence of a particular type of ignition, etc. Statistical techniques can then be used to examine whether a trend, or a change, is significant, and appropriate measures taken.
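As an illustration of the kind of statistical screening referred to above, the following sketch fits a straight line to annual fire-death counts. The figures used are purely hypothetical illustration data, not real statistics, and a formal significance test (for example, a t-test on the slope or a Poisson regression) would be applied in practice. The sketch requires Python 3.10 or later.

```python
# A minimal sketch of a trend check on annual fire-death counts.
# The figures below are hypothetical illustration data, not real statistics.
from statistics import linear_regression, correlation

years  = list(range(1980, 1990))
deaths = [6200, 6050, 5900, 5850, 5600, 5500, 5450, 5200, 5100, 4950]  # hypothetical

slope, intercept = linear_regression(years, deaths)   # least-squares fit
r = correlation(years, deaths)                        # strength of the linear trend

print(f"Estimated change per year: {slope:.0f} deaths")
print(f"Correlation coefficient  : {r:.2f}")
# A formal significance test would be applied before drawing policy conclusions.
```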
In a number of countries, the fire brigade is required to submit a report on each fire attended. In the United Kingdom and the United States, the officer in charge completes a report form which is then submitted to a central organization (the Home Office in the United Kingdom, the National Fire Protection Association, NFPA, in the United States) which then codes and processes the data in a prescribed fashion. The data are then available for inspection by government bodies and other interested parties. These databases are invaluable in highlighting (for example) the principal sources of ignition and the items first ignited. An examination of the incidence of fatalities and their relationship to sources of ignition, etc. has shown that the number of people who die in fires started by smokers’ materials is significantly out of proportion with the number of fires which originate in this way.
The reliability of these databases depends on the skill with which the fire officers carry out the fire investigation. Fire investigation is not an easy task, and requires considerable ability and knowledge, in particular a knowledge of fire science. The Fire Service in the United Kingdom has a statutory duty to submit a fire report form for every fire attended, which places a considerable responsibility on the officer in charge. The construction of the form is crucial, as it must elicit the required information in sufficient detail. The “Basic Incident Report Form” recommended by the NFPA is shown in the Fire Protection Handbook (Cote 1991).
The data can be used in two ways, either to identify a fire problem or to provide the rational argument necessary to justify a particular course of action that may require public or private expenditure. A long-established database can be used to show the effects of actions taken. The following ten points have been gleaned from NFPA statistics over the period 1980 to 1989 (Cote 1991):
1. Home smoke detectors are widely used and very effective (but significant gaps in the detector strategy remain).
2. Automatic sprinklers produce large reductions in loss of life and property.
3. Increased use of portable and area heating equipment sharply increased home fires involving heating equipment.
4. Incendiary and suspicious fires continued to decline from the 1970s peak, but associated property damage stopped declining.
5. A large share of fire-fighter fatalities is attributed to heart attacks and to activities away from the fireground.
6. Rural areas have the highest fire death rates.
7. Smoking materials igniting upholstered furniture, mattresses or bedding produce the most deadly residential fire scenarios.
8. US and Canadian fire death rates are amongst the highest of all the developed countries.
9. The states of the Old South in the United States have the highest fire death rates.
10. Older adults are at particularly high risk of death in fire.
Such conclusions are, of course, country-specific, although there are some common trends. Careful use of such data can provide the means of formulating sound policies regarding fire safety in the community. However, it must be remembered that these are inevitably “reactive”, rather than “proactive”. Proactive measures can only be introduced following a detailed fire hazard assessment. Such a course of action has been introduced progressively, starting in the nuclear industry and moving into the chemical, petrochemical and offshore industries where the risks are much more easily defined than in other industries. Their application to hotels and public buildings generally is much more difficult and requires the application of fire modelling techniques to predict the course of a fire and how the fire products will spread through the building to affect the occupants. Major advances have been made in this type of modelling, although it must be said that there is a long way to go before these techniques can be used with confidence. Fire safety engineering is still in need of much basic research in fire safety science before reliable fire hazard assessment tools can be made widely available.
Fire and combustion have been defined in various ways. For our purposes, the most important statements in connection with combustion, as a phenomenon, are as follows:
· Combustion represents a self-sustaining sequence of reactions consisting of physical and chemical transformations.
· The materials involved enter into reaction with the oxidizing agent in their surroundings, which in most cases is the oxygen in the air.
· Ignition requires favourable starting conditions, generally sufficient heating of the system to cover the initial energy demand of the chain reaction of burning.
· The reactions are often exothermic, which means that heat is released during burning, a phenomenon often accompanied by visibly observable flaming.
Ignition may be considered the first step of the self-sustaining process of combustion. It may occur as piloted ignition (or forced ignition) if the phenomenon is caused by an external ignition source, or it may occur as auto ignition (or self ignition) if the phenomenon is the result of reactions taking place in the combustible material itself and coupled with heat release.
The inclination to ignition is characterized by an empirical parameter, the ignition temperature (i.e., the lowest temperature, determined by test, to which the material has to be heated for ignition). Depending upon whether or not this parameter is determined, with special test methods, by the use of an ignition source, we distinguish between the piloted ignition temperature and the auto ignition temperature.
In the case of piloted ignition, the energy required for the activation of the materials involved in the burning reaction is supplied by ignition sources. However, there is no direct relationship between the heat quantity needed for ignition and the ignition temperature, because although the chemical composition of the components in the combustible system is an essential parameter of the ignition temperature, the latter is considerably influenced by the sizes and shapes of materials, the pressure of the environment, the conditions of air flow, the parameters of the ignition source, the geometrical features of the testing device, etc. This is why the data published in the literature for auto ignition temperature and piloted ignition temperature can differ significantly.
The ignition mechanism of materials in different states may be simply illustrated. This involves examining materials as either solids, liquids or gases.
Most solid materials take up energy from an external ignition source by conduction, convection or radiation (mostly by a combination of these), or are heated up as a result of heat-producing processes taking place internally that start decomposition on their surfaces.
For liquids to ignite, a vapour space capable of burning must form above their surface. The vapours released and the gaseous decomposition products mix with the air above the surface of the liquid or solid material.
The turbulent flows that arise in the mixture and/or the diffusion help the oxygen to reach the molecules, atoms and free radicals on and above the surface which are already capable of reaction. The activated particles interact, releasing heat. The process steadily accelerates, and as the chain reaction starts, the material ignites and burns.
The combustion in the layer under the surface of solid combustible materials is called smouldering, and the burning reaction taking place on the interface of solid materials and gas is called glowing. Burning with flames (or flaming) is the process in the course of which the exothermic reaction of burning runs in the gas phase. This is typical for the combustion of both liquid and solid materials.
Combustible gases burn naturally in the gas phase. It is an important empirical observation that mixtures of gases and air are capable of ignition in a certain range of concentration only. This also holds for the vapours of liquids. The lower and upper flammable limits of gases and vapours depend on the temperature and pressure of the mixture, the ignition source and the concentration of inert gases in the mixture.
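As a simple illustration of the flammable range, the following sketch checks whether a measured fuel concentration lies between the lower and upper limits. The limits shown are approximate published values for propane in air at ambient conditions and are used here only as an example; table 41.1 should be consulted for authoritative figures.

```python
# Minimal check of whether a gas/air mixture lies within the flammable range.
# The limits are approximate values for propane in air, used only as an example.

def in_flammable_range(conc_vol_pct: float, lfl: float, ufl: float) -> bool:
    """True if the concentration is between the lower and upper flammable limits."""
    return lfl <= conc_vol_pct <= ufl

PROPANE_LFL = 2.1   # vol % in air (approximate)
PROPANE_UFL = 9.5   # vol % in air (approximate)

for c in (1.0, 4.0, 12.0):
    flammable = in_flammable_range(c, PROPANE_LFL, PROPANE_UFL)
    print(f"{c:4.1f} vol % propane -> flammable: {flammable}")
```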
The phenomena supplying heat energy may be grouped into four fundamental categories as to their origin (Sax 1979):
1. heat energy generated during chemical reactions (heat of oxidation, heat of combustion, heat of solution, spontaneous heating, heat of decomposition, etc.)
2. electrical heat energy (resistance heating, induction heating, heat from arcing, electric sparks, electrostatic discharges, heat generated by lightning strokes, etc.)
3. mechanical heat energy (frictional heat, friction sparks)
4. heat generated by nuclear decomposition.
The following discussion addresses the most frequently encountered sources of ignition.
Open flames may be the simplest and most frequently used ignition source. A large number of tools in general use and various types of technological equipment operate with open flames, or enable the formation of open flames. Burners, matches, furnaces, heating equipment, flames of welding torches, broken gas and oil pipes, etc. may practically be considered potential ignition sources. Because with an open flame the primary ignition source itself represents an existing self-sustaining combustion, the ignition mechanism means in essence the spreading of burning to another system. Provided that the ignition source with open flame possesses sufficient energy for initiating ignition, burning will start.
Chemical reactions that generate heat spontaneously carry the risk of ignition and burning as “internal ignition sources”. Materials inclined to spontaneous heating and spontaneous ignition may, however, become secondary ignition sources and give rise to ignition of the combustible materials in the surroundings.
Although some gases (e.g., hydrogen phosphide, boron hydride, silicon hydride) and liquids (e.g., metal carbonyls, organometallic compounds) are inclined to spontaneous ignition, most spontaneous ignitions occur as surface reactions of solid materials. Spontaneous ignition, like all ignition, depends on the chemical structure of the material, but its occurrence is determined by the degree of dispersion. A large specific surface area enables the local accumulation of reaction heat and contributes to the increase of the temperature of the material above its spontaneous ignition temperature.
Spontaneous ignition of liquids is also promoted if they come into contact with air on solid materials of large specific surface area. Fats, and especially unsaturated oils containing double bonds, when absorbed by fibrous materials and their products, and when impregnated into textiles of plant or animal origin, are inclined to spontaneous ignition under normal atmospheric conditions. Spontaneous ignition of glass-wool and mineral-wool products, produced from non-combustible fibres or inorganic materials covering large specific surfaces and contaminated by oil, has caused very severe fire accidents.
Spontaneous ignition has been observed mainly with dusts of solid materials. For metals with good heat conductivity, the local heat accumulation needed for ignition necessitates very fine crushing of the metal. As the particle size decreases, the likelihood of spontaneous ignition increases, and with some metal dusts (for example, iron) pyrophoricity ensues. When storing and handling coal dust, finely divided soot, dusts of lacquers and synthetic resins, as well as during the technological operations carried out with them, special attention should be given to preventive measures against fire to reduce the hazard of spontaneous ignition.
Materials inclined to spontaneous decomposition show special ability to ignite spontaneously. Hydrazine, when set on any material with a large surface area, bursts into flames immediately. The peroxides, which are widely used by the plastics industry, easily decompose spontaneously, and as a consequence of decomposition, they become dangerous ignition sources, occasionally initiating explosive burning.
The violent exothermic reaction that occurs when certain chemicals come into contact with each other may be considered a special case of spontaneous ignition. Examples of such cases are contact of concentrated sulphuric acid with any organic combustible material, chlorates with sulphur or ammonium salts or acids, organic halogen compounds with alkali metals, etc. The tendency of such materials to be “unable to bear each other” (incompatible materials) requires special attention, particularly when they are stored together and when fire-fighting regulations are drawn up.
It is worth mentioning that such hazardously high spontaneous heating may, in some cases, be due to wrong technological conditions (insufficient ventilation, low cooling capacity, shortcomings in maintenance and cleaning, overheating of the reaction, etc.), or may be promoted by them.
Certain agricultural products, such as fibrous feedstuffs, oily seeds, germinating cereals, final products of the processing industry (dried beetroot slices, fertilizers, etc.), show an inclination for spontaneous ignition. The spontaneous heating of these materials has a special feature: the dangerous temperature conditions of the systems are exacerbated by some exothermic biological processes that cannot be controlled easily.
Power machines, instruments and heating devices operated by electric energy, as well as the equipment for power transformation and lighting, typically do not present any fire hazard to their surroundings, provided that they have been installed in compliance with the relevant regulations of safety and requirements of standards and that the associated technological instructions have been observed during their operation. Regular maintenance and periodic supervision considerably diminish the probability of fires and explosions. The most frequent causes of fires in electric devices and wiring are overloading, short circuits, electric sparks and high contact resistances.
Overloading exists when the wiring and electrical appliances are exposed to higher currents than those for which they are designed. The overcurrent passing through the wiring, devices and equipment might lead to such overheating that the components of the electrical system become damaged or broken, age prematurely or carbonize, with cord and cable coatings melting down, metal parts glowing and the combustible structural units coming to ignition and, depending on the conditions, the fire also spreading to the environment. The most frequent cause of overloading is that the number of consumers connected is higher than permitted or their capacity exceeds the value stipulated.
The working safety of electrical systems is most frequently endangered by short circuits. They are always the consequence of some form of damage, and occur when parts of the electrical wiring or equipment at the same potential level or at various potential levels, insulated from each other and from the earth, come into contact with each other or with the earth. This contact may arise directly, as metal-metal contact, or indirectly, through an electric arc. In a short circuit, when units of the electrical system come into contact with each other, the resistance will be considerably lower, and as a consequence the intensity of the current will be extremely high, perhaps by several orders of magnitude. The heat energy released during overcurrents associated with large short circuits might result in a fire in the device affected by the short circuit, with the materials and equipment in the surrounding area coming to ignition and with the fire spreading to the building.
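The orders of magnitude involved can be illustrated with a minimal sketch based on Ohm's law and Joule heating; the supply voltage and resistances are arbitrary example values, not data from any particular installation.

```python
# Illustrative comparison of current and heating power under a normal load
# and under a short circuit.  All figures are arbitrary example values.

VOLTAGE_V = 230.0          # assumed supply voltage
R_NORMAL_OHM = 23.0        # resistance of a normal load (about 10 A)
R_FAULT_OHM = 0.05         # assumed residual resistance of a metallic short circuit

for label, r in (("normal load", R_NORMAL_OHM), ("short circuit", R_FAULT_OHM)):
    current = VOLTAGE_V / r            # Ohm's law
    power = current ** 2 * r           # Joule heating dissipated in the circuit
    print(f"{label:13s}: I = {current:7.0f} A, P = {power / 1000:8.1f} kW")
```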
Electric sparks are heat energy sources of small magnitude, but, as shown by experience, they frequently act as ignition sources. Under normal working conditions, most electrical appliances do not release sparks, but the operation of certain devices is normally accompanied by sparks.
Sparking presents a hazard primarily at places where, in the zone of their generation, explosive concentrations of gas, vapour or dust might arise. Consequently, equipment that normally releases sparks during operation may be set up only at places where the sparks cannot give rise to fire. On its own, the energy content of sparks is insufficient for the ignition of the materials in the environment or to initiate an explosion.
If an electrical system does not have perfect metallic contact between the structural units through which the current flows, high contact resistance will occur at that spot. This phenomenon is in most cases due to the faulty construction of joints or to unworkmanlike installation. The loosening of joints during operation and natural wear may also cause high contact resistance. A large portion of the current flowing through places with increased resistance will be transformed to heat energy. If this energy cannot be dissipated sufficiently (and the cause cannot be eliminated), the extremely large increase of temperature might lead to a fire condition that endangers the surroundings.
If devices working on the induction principle (motors, dynamos, transformers, relays, etc.) are not properly designed, eddy currents may arise during operation. Due to the eddy currents, the structural units (coils and their iron cores) might warm up, which might lead to the ignition of insulating materials and the burning of the equipment. Eddy currents might arise, with these harmful consequences, also in the metal structural units around high-voltage equipment.
Electrostatic charging is a process in the course of which any material, originally with electric neutrality (and independent of any electric circuit) becomes charged positively or negatively. This may occur in one of three ways:
1. charging by separation, such that charges of opposite polarity accumulate on two bodies simultaneously
2. charging by passage, such that charges flowing away leave charges of opposite polarity behind
3. charging by taking up, such that the body receives charges from outside.
These three ways of charging may arise from various physical processes, including separation after contact, splitting, cutting, pulverizing, moving, rubbing, the flow of powders and fluids in pipes, impact, change of pressure, change of state, photoionization, thermal ionization, electrostatic induction or high-voltage discharge.
Electrostatic charging may occur both on conducting bodies and insulating bodies as a result of any of the processes mentioned above, but in most cases the mechanical processes are responsible for the accumulation of the unwanted charges.
Of the many harmful effects and risks due to electrostatic charging and the resulting spark discharge, two deserve particular mention: the endangering of electronic equipment (for example, computers for process control) and the hazard of fire and explosion.
Electronic equipment is endangered first of all if the discharge energy from the charging is sufficiently high to destroy the input of a semiconductor component. The development of electronic units in the last decade has been followed by a rapid increase of this risk.
The development of a fire or explosion risk requires the coincidence in space and time of two conditions: the presence of a combustible medium and a discharge capable of igniting it. This hazard occurs mainly in the chemical industry. It may be estimated on the basis of the so-called spark sensitivity of hazardous materials (minimum ignition energy) and depends on the extent of charging.
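A rough screening of the spark-discharge hazard can be made by comparing the energy stored on a charged conductor, E = ½CV², with the minimum ignition energy of the surrounding medium, as in the following sketch. The capacitance, voltage and minimum ignition energy used here are illustrative assumptions, not measured data.

```python
# Compare the energy stored on a charged conductor (E = 1/2 * C * V^2) with
# the minimum ignition energy (MIE) of the surrounding medium.
# All numerical values below are illustrative assumptions.

def discharge_energy_mj(capacitance_pf: float, voltage_kv: float) -> float:
    """Stored energy in millijoules for a capacitance in pF and a voltage in kV."""
    c = capacitance_pf * 1e-12   # farads
    v = voltage_kv * 1e3         # volts
    return 0.5 * c * v * v * 1e3 # joules -> millijoules

BODY_CAPACITANCE_PF = 200.0   # typical order of magnitude for a human body
CHARGE_VOLTAGE_KV = 10.0      # assumed charging voltage
MIE_VAPOUR_MJ = 0.25          # assumed MIE of a hydrocarbon vapour/air mixture

energy = discharge_energy_mj(BODY_CAPACITANCE_PF, CHARGE_VOLTAGE_KV)
print(f"Stored energy: {energy:.1f} mJ")
print("Ignition possible" if energy > MIE_VAPOUR_MJ else "Below assumed MIE")
```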
It is an essential task to reduce these risks, the consequences of which range from technological disturbances to catastrophes with fatal accidents. There are two means of protecting against the consequences of electrostatic charging:
1. preventing the initiation of the charging process (it is evident, but usually very difficult to realize)
2. restricting the accumulation of charges to prevent the occurrence of dangerous discharges (or any other risk).
Lightning is an atmospheric electrical phenomenon in nature and may be considered an ignition source. The static charge produced in the clouds is equalized towards the earth (lightning stroke) and is accompanied by a high-energy discharge. Combustible materials at the place of the lightning stroke and in its surroundings might ignite and burn. In some lightning strokes, very strong impulses are generated, and the energy is equalized in several steps. In other cases, long-lasting currents start to flow, sometimes reaching the order of magnitude of 10 A.
Friction is ever-present in technical practice. During mechanical operation, frictional heat is developed, and if heat loss is restricted to such an extent that heat accumulates in the system, its temperature may increase to a value that is dangerous for the environment, and fire may occur.
Friction sparks normally occur in metal-working operations because of heavy friction (grinding, chipping, cutting, hitting), because of metal objects or tools dropping or falling onto a hard floor, or during grinding operations because of metal contamination within the material being ground. The temperature of the spark generated is normally higher than the ignition temperature of conventional combustible materials (such as for sparks from steel, 1,400-1,500 °C; sparks from copper-nickel alloys, 300-400 °C); however, the ignition ability depends on the total heat content of the spark and on the lowest ignition energy of the material or substance to be ignited. It has been proven in practice that friction sparks present a real fire risk in air spaces where combustible gases, vapours and dusts are present in dangerous concentrations. Thus, under these circumstances the use of materials that easily produce sparks, as well as processes involving mechanical sparking, should be avoided. In these cases, safety is provided by tools that do not spark, i.e., made from wood, leather or plastic materials, or by using tools of copper and bronze alloys that produce sparks of low energy.
In practice, the surfaces of equipment and devices may warm up to a dangerous extent either in normal operation or due to malfunction. Ovens, furnaces, drying devices, waste-gas outlets, vapour pipes, etc. often cause fires in explosive air spaces. Furthermore, their hot surfaces may ignite combustible materials coming close to or into contact with them. For prevention, safe distances should be observed, and regular supervision and maintenance will reduce the probability of the occurrence of dangerous overheating.
The presence of combustible material in combustible systems represents an obvious condition of burning. Burning phenomena and the phases of the burning process fundamentally depend on the physical and chemical properties of the material involved. Therefore, it seems reasonable to make a survey of the flammability of the various materials and products with respect to their character and properties. For this section, the ordering principle for the grouping of materials is governed by technical aspects rather than by theoretical conceptions (NFPA 1991).
Wood is one of the most common materials in the human environment. Houses, building structures, furniture and consumer goods are made of wood, and it is also widely used for products such as paper as well as in the chemical industry.
Wood and wood products are combustible, and when in contact with high-temperature surfaces and exposed to heat radiation, open flames or any other ignition source, will carbonize, glow, ignite or burn, depending upon the conditions of combustion. To widen the field of their application, improvement of their combustion properties is required. In order to make structural units produced from wood less combustible, they are typically treated with fire-retardant agents (e.g., saturated, impregnated, provided with a surface coating).
The most essential characteristic of the combustibility of the various kinds of wood is the ignition temperature. Its value strongly depends on some of the properties of the wood and on the test conditions of determination, namely the wood sample’s density, humidity, size and shape, as well as the ignition source, time of exposure, intensity of exposure and the atmosphere during testing. It is interesting to note that the ignition temperature determined by different test methods differs. Experience has shown that the inclination of clean and dry wood products to ignition is extremely low, but several fire cases caused by spontaneous ignition are known to have occurred from storing dusty and oily waste wood in rooms with imperfect ventilation. It has been proven empirically that higher moisture content increases the ignition temperature and reduces the burning speed of wood. The thermal decomposition of wood is a complicated process, but its phases may clearly be observed as follows (a simple illustrative mapping of these temperature ranges onto the phases is sketched after the list):
· Thermal decomposition with mass loss starts already in the range 120-200 °C; moisture is released, and non-combustible degradation products appear in the combustion space.
· At 200-280 °C, mainly endothermic reactions occur while the heat energy of the ignition source is taken up.
· At 280-500 °C, the exothermic reactions of the decomposition products steadily accelerate as the primary process, while carbonization phenomena may be observed. In this temperature range, self-sustaining combustion has already developed. After ignition, burning is not steady in time because of the good heat-insulating ability of the carbonized layers. Consequently, the warming up of the deeper layers is limited and time consuming. When the release of combustible decomposition products to the surface accelerates, burning becomes complete.
· At temperatures exceeding 500 °C, the wood forms char residues. During their further glowing, ash containing solid, inorganic materials is produced, and the process comes to an end.
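The temperature ranges listed above can be summarized in a simple illustrative mapping, as sketched below; the boundaries are taken directly from the list, while real behaviour also depends on moisture, density, sample size and heating conditions.

```python
# Illustrative mapping of the temperature ranges listed above onto the phases
# of wood decomposition described in the text.  The boundaries are taken from
# the list; they are indicative only.

def wood_decomposition_phase(temp_c: float) -> str:
    if temp_c < 120:
        return "no significant decomposition"
    if temp_c < 200:
        return "drying; release of non-combustible degradation products"
    if temp_c < 280:
        return "mainly endothermic decomposition"
    if temp_c < 500:
        return "accelerating exothermic decomposition, carbonization, sustained burning"
    return "char residue glowing to ash"

for t in (100, 150, 250, 400, 600):
    print(f"{t:4d} °C: {wood_decomposition_phase(t)}")
```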
Most of the textiles produced from fibrous materials that are found in people’s immediate surroundings are combustible. Clothing, furniture and the built environment consist partly or totally of textiles. The hazard which they present exists during their production, processing and storage as well as during their use.
The basic materials of textiles are both natural and artificial; synthetic fibres are used either alone or mixed with natural fibres. The chemical composition of the natural fibres of plant origin (cotton, hemp, jute, flax) is cellulose, which is combustible, and these fibres have a relatively high ignition temperature (approx. 400 °C). It is an advantageous feature of their burning that when brought to high temperature they carbonize but do not melt. This is especially advantageous for the medical treatment of burn casualties.
The fire hazardous properties of fibres of protein base of animal origin (wool, silk, hair) are even more favourable than those of fibres of plant origin, because a higher temperature is required for their ignition (500-600 °C), and under the same conditions, their burning is less intensive.
The plastics industry, utilizing several extremely good mechanical properties of polymer products, has also gained prominence in the textile industry. Among the properties of acrylic, polyester and the thermoplastic synthetic fibres (nylon, polypropylene, polyethylene), those associated with burning are the least advantageous. Most of them, in spite of their high ignition temperature (approx. 400-600 °C), melt when exposed to heat, ignite easily, burn intensively, drip or melt when burning, and release considerable quantities of smoke and toxic gases. These burning properties may be improved by the addition of natural fibres, producing so-called textiles with mixed fibres. Further treatment is accomplished with flame-retardant agents. For the manufacture of textiles for industrial purposes and heat-protective clothing, inorganic, non-combustible fibre products (including glass and metal fibres) are already used in large quantities.
The most important fire hazard characteristics of textiles are the properties connected with ignitability, flame spread, heat generation and the toxic combustion products. Special testing methods have been developed for their determination. The test results obtained influence the fields of application for these products (tents and flats, furniture, vehicle upholstery, clothes, carpets, curtains, special protective clothing against heat and weather), as well as the stipulations to restrict the risks in their use. An essential task of industrial researchers is to develop textiles that sustain high temperature, treated with fire-retardant agents (hardly combustible, with long ignition time, low flame spread rate and low rate of heat release), and that produce small amounts of toxic combustion products, as well as to improve the unfavourable effects of fire accidents due to the burning of such materials.
In the presence of ignition sources, combustible and flammable liquids are potential sources of risk. First, the closed or open vapour space above such liquids provides a fire and explosion hazard. Combustion, and more frequently explosion, might occur if the material is present in the vapour-air mixture in suitable concentration. From this it follows that burning and explosion in the zone of combustible and flammable liquids may be prevented if:
· the ignition sources, air, and oxygen are excluded; or
· instead of oxygen, inert gas is present in the surrounding; or
· the liquid is stored in a closed vessel or system (see figure 41.3); or
· by proper ventilation, the development of the dangerous vapour concentration is prevented.
In practice, a large number of material characteristics are known in connection with the dangerous nature of combustible and flammable liquids. These are the closed-cup and open-cup flash points, the boiling point, the ignition temperature, the rate of evaporation, the upper and lower limits of concentration for combustibility (flammable or explosive limits), the relative density of the vapours compared to air and the energy required for ignition of the vapours. These factors provide full information about the sensitivity to ignition of various liquids.
Nearly everywhere in the world, the flash point, a parameter determined by standard test under atmospheric conditions, is used as the basis for grouping liquids (and materials behaving as liquids at relatively low temperatures) into categories of risk. The safety requirements for the storage of liquids, their handling, the technological processes and the electrical equipment to be set up in their zone should be elaborated for each category of flammability and combustibility. The zones of risk around the technological equipment should also be identified for each category. Experience has shown that fire and explosion might occur, depending on the temperature and pressure of the system, within the range of concentration between the two flammable limits.
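As an illustration of such grouping, the following sketch assigns liquids to broad categories by closed-cup flash point. The 23 °C and 60 °C thresholds mirror common international conventions but are not the only ones in use, and the flash points shown are approximate published values; actual classification must follow the applicable local code.

```python
# Sketch of grouping liquids into broad risk categories by closed-cup flash
# point.  Thresholds and flash points are indicative assumptions only.

def liquid_category(flash_point_c: float) -> str:
    if flash_point_c < 23.0:
        return "highly flammable"
    if flash_point_c < 60.0:
        return "flammable"
    return "combustible"

# Approximate published flash points, for illustration.
samples = {"petrol": -43.0, "ethanol": 13.0, "kerosene": 38.0, "diesel fuel": 62.0}
for name, fp in samples.items():
    print(f"{name:12s} (flash point {fp:6.1f} °C): {liquid_category(fp)}")
```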
Although all materials, at a specific temperature and pressure, may become gases, the materials considered gaseous in practice are those that are in a gaseous state at normal temperature (~20 °C) and normal atmospheric pressure (~100 kPa).
In respect to fire and explosion hazards, gases may be ranked in two main groups: combustible and non-combustible gases. According to the definition accepted in practice, combustible gases are those that burn in air with normal oxygen concentration, provided that the conditions required for burning exist. Ignition occurs only above a certain temperature (the ignition temperature) and within a given range of concentration.
Non-combustible gases are those that do not burn either in oxygen or in air at any concentration. Some of these gases support combustion (e.g., oxygen), while others inhibit burning. The non-combustible gases that do not support burning are called inert gases (nitrogen, noble gases, carbon dioxide, etc.).
In order to achieve economic efficiency, the gases stored and transported in containers or transporting vessels are typically in a compressed, liquefied or cooled-condensed (cryogenic) state. Basically, there are two hazardous situations in connection with gases: when they are in containers and when they are released from their containers.
For compressed gases in storage containers, external heat might considerably increase the pressure within the container, and the extreme overpressure might lead to explosion. Storage containers for liquefied gases typically include both a vapour phase and a liquid phase. Because of changes in pressure and temperature, the expansion of the liquid phase gives rise to further compression of the vapour space, while the vapour pressure of the liquid increases in proportion to the increase in temperature. As a result of these processes, critically dangerous pressures may be produced. Storage containers are therefore generally required to be fitted with overpressure relief devices. These are capable of mitigating a hazardous situation due to higher temperatures.
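The effect of external heating on a container holding only compressed gas (no liquid phase) can be estimated with the ideal gas law, as in the following sketch; the filling pressure and temperatures are illustrative assumptions. For liquefied gases the vapour pressure rises much more steeply with temperature.

```python
# Rough estimate of the pressure rise in a closed gas container exposed to
# external heating, assuming ideal-gas behaviour and no liquid phase.
# The figures are illustrative assumptions only.

def heated_pressure_bar(p1_bar: float, t1_c: float, t2_c: float) -> float:
    """Isochoric absolute pressure at the higher temperature (kelvin ratio)."""
    return p1_bar * (t2_c + 273.15) / (t1_c + 273.15)

P_FILL_BAR = 150.0   # assumed filling pressure (absolute) at 20 °C
for t in (20.0, 50.0, 100.0, 300.0):
    print(f"{t:5.0f} °C -> {heated_pressure_bar(P_FILL_BAR, 20.0, t):6.0f} bar")
```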
If the storage vessels are insufficiently sealed or damaged, the gas will flow out into the free air space, mix with the air and, depending on its quantity and the way it flows, may cause the formation of a large, explosive air space. The air around a leaking storage vessel may be unsuitable for breathing and may be dangerous for people nearby, partly due to the toxic effect of some gases and partly due to the reduced concentration of oxygen.
Bearing in mind the potential fire hazard due to gases and the need for safe operation, one must have detailed knowledge of the following features of gases either stored or used, especially for industrial consumers: the chemical and physical properties of the gases, the ignition temperature, the lower and upper limits of concentration for flammability, the hazardous parameters of the gas in the container, the risk factors of the hazardous situation caused by gases released into the open air, the extent of the necessary safety zones and the special measures to be taken in a possible emergency situation connected with fire-fighting.
Knowledge of the hazardous parameters of chemicals is one of the basic conditions of safe working. The preventive measures and requirements for protection against fire may be elaborated only if the physical and chemical properties connected with fire hazard are taken into consideration. Of these properties, the most important ones are the following: combustibility; ignitability; ability to react with other materials, water or air; inclination to corrosion; toxicity; and radioactivity.
Information on the properties of chemicals can be obtained from the technical data sheets issued by manufacturers and from the manuals and handbooks containing the data of hazardous chemicals. These provide users with information not only about the general technical features of materials, but also about the actual values of hazard parameters (decomposition temperature, ignition temperature, limit concentrations of combustion, etc.), their special behaviour, requirements for storage and fire-fighting, as well as recommendations for first aid and medical therapy.
The toxicity of chemicals, as a potential fire hazard, may act in two ways. First, the high toxicity of certain chemicals may itself be hazardous in a fire. Second, their presence within the fire zone may effectively restrict fire-fighting operations.
The oxidizing agents (nitrates, chlorates, inorganic peroxides, permanganates, etc.), even if they themselves are non-combustible, largely contribute to the ignition of combustible materials and to their intensive, occasionally explosive burning.
The group of unstable materials includes the chemicals (acetaldehyde, ethylene oxide, organic peroxides, hydrogen cyanide, vinyl chloride) which polymerize or decompose in violent exothermic reactions spontaneously or very easily.
Materials sensitive to water and air are extremely dangerous. These materials (oxides, hydroxides, hydrides, anhydrides, alkali metals, phosphorus, etc.) interact with the water and air that are always present in the normal atmosphere, and start reactions accompanied by very high heat generation. If they are combustible materials, they will come to spontaneous ignition. However, the combustible components that initiate the burning may also explode, and the burning may spread to the combustible materials in the surrounding area.
The majority of corrosive materials (inorganic acids such as sulphuric acid, nitric acid and perchloric acid, and halogens such as fluorine, chlorine, bromine and iodine) are strong oxidizing agents, but at the same time they have very strong destructive effects on living tissues, and therefore special measures have to be taken for fire-fighting.
The dangerous character of radioactive elements and compounds is increased by the fact that the radiation they emit may be harmful in several ways, in addition to the fact that such materials may themselves be fire hazards. If, in a fire, the structural containment of the radioactive objects involved becomes damaged, γ-radiating materials might be released. They can have a very strong ionizing effect and are capable of the fatal destruction of living organisms. Nuclear accidents can be accompanied by fires, the decomposition products of which bind radioactive (α- and β-radiating) contaminants by adsorption. These may cause permanent injuries to the persons taking part in rescue operations if they penetrate their bodies. Such materials are extremely dangerous, because the persons affected cannot perceive the radiation with their sense organs, and their general state of health does not seem to be any worse. It is obvious that if radioactive materials burn, the radioactivity of the site, the decomposition products and the water used for fire-fighting should be kept under constant observation by means of radiation-monitoring devices. These factors have to be taken into account in the strategy of intervention and all additional operations. The buildings for handling and storing radioactive materials, as well as for their technological use, need to be built of non-combustible materials of high fire resistance. At the same time, high-quality automatic equipment for detecting, signalling and extinguishing a fire should be provided.
Explosive materials are used for many military and industrial purposes. These are chemicals and mixtures which, when affected by strong mechanical force (impact, shock, friction) or by an initiating ignition, suddenly transform into gases of large volume through an extremely rapid oxidizing reaction propagating at, for example, 1,000-10,000 m/s. The volume of these gases is many times the volume of the explosive material from which they formed, and they exert very high pressure on the surroundings. During an explosion, high temperatures can arise (2,500-4,000 °C) that promote the ignition of the combustible materials in the zone of the explosion.
Manufacture, transport and storage of the various explosive materials are governed by rigorous requirements. An example is NFPA 495, Explosive Materials Code.
Besides the explosive materials used for military and industrial purposes, initiating blasting materials and pyrotechnical products are also treated as hazards. Individual explosive compounds (picric acid, nitroglycerine, hexogen, etc.) are often used, but mixtures of materials capable of explosion are also in use (black powder, dynamite, ammonium nitrate, etc.). In the course of acts of terrorism, plastic explosives have become well known; these are, in essence, mixtures of brisant explosives and plasticizing materials (various waxes, Vaseline, etc.).
For explosive materials, the most effective method of protection against fire is the exclusion of ignition sources from the surroundings. Several explosive materials are sensitive to water or to various organic materials capable of oxidation. For these materials, the requirements for the conditions of storage and the rules for storing them together with other materials should be carefully considered.
It is known from practice that nearly all metals, under certain conditions, are capable of burning in atmospheric air. Steel and aluminium in large structural thicknesses, on the basis of their behaviour in fire, are clearly evaluated as non-combustible. However, the dusts of aluminium and of finely divided iron, and metal wools made from thin metal fibres, can easily be ignited and burn intensively. The alkali metals (lithium, sodium, potassium), the alkaline-earth metals (calcium, magnesium, zinc), zirconium, hafnium, titanium, etc. ignite extremely easily in the form of powder, filings or thin bands. Some metals are so sensitive that they are stored separately from air, in inert gas atmospheres or under a liquid that is neutral to the metals.
The combustible metals, and those that can be brought to burn, produce extremely violent burning reactions: high-speed oxidation processes releasing considerably larger quantities of heat than are observed in the burning of combustible and flammable liquids. The burning of settled metal dust, following a preliminary phase of glowing ignition, might grow into rapid burning. With stirred-up dusts and the clouds of dust that might result, severe explosions can occur. The burning activity and affinity for oxygen of some metals (such as magnesium) are so high that after being ignited they will continue to burn in certain media (e.g., nitrogen, carbon dioxide, steam atmospheres) that are used for extinguishing fires of combustible solid materials and liquids.
Extinguishing metal fires presents a special task for fire-fighters. The choice of the proper extinguishing agent and the process in which it is applied are of great importance.
Fires of metals may be controlled by very early detection, by the rapid and appropriate action of fire-fighters using the most effective method and, if possible, by removal of the metals and any other combustible materials from the zone of burning, or at least by a reduction of their quantities.
Special attention should be given to protection against radiation when radioactive metals (plutonium, uranium) burn. Preventive measures have to be taken to avoid the penetration of toxic decomposition products into living organisms. For example, alkali metals, because of their ability to react violently with water, may be extinguished with dry fire-extinguishing powders only. Burning magnesium cannot be successfully extinguished with water, carbon dioxide, halons or nitrogen; more importantly, if these agents are used in fire-fighting, the hazardous situation will become even more severe. The only agents that can be applied successfully are the noble gases or, in some cases, boron trifluoride.
Plastics are macromolecular organic compounds produced synthetically or by modification of natural materials. The structure and shape of these macromolecular materials, produced by polymerization, polyaddition or polycondensation reactions, strongly influence their properties. The chain molecules of thermoplastics (polyamides, polycarbonates, polyesters, polystyrene, polyvinyl chloride, polymethyl methacrylate, etc.) are linear or branched, the elastomers (neoprene, polysulphides, isoprene, etc.) are lightly cross-linked, while thermosetting plastics (duroplastics: polyalkyds, epoxy resins, polyurethanes, etc.) are densely cross-linked.
Natural caoutchouc is used as a raw material by the rubber industry and, after vulcanization, rubber is produced. Artificial caoutchoucs, the structure of which is similar to that of natural caoutchouc, are polymers and co-polymers of butadiene.
The range of products from plastics and rubber used in nearly all fields of everyday life is steadily widening. Use of the large variety and excellent technical properties of this group of materials results in items such as various building structures, furniture, clothes, commodities, parts for vehicles and machines.
Typically, as organic materials, plastics and rubber are also considered to be combustible materials. For the description of their fire behaviour, a number of parameters are used that can be tested by special methods. With a knowledge of these parameters, the fields of their application can be determined and the fire safety provisions elaborated. These parameters are combustibility, ignitability, ability to develop smoke, inclination to produce toxic gases and burning dripping.
In many cases the ignition temperature of plastics is higher than that of wood or other materials, but in most cases they ignite more easily, and their burning takes place more rapidly and with higher intensity. Fires of plastics are often accompanied by the unpleasant phenomenon of large quantities of dense smoke being released, which can strongly restrict visibility, and of the development of various toxic gases (hydrochloric acid, phosgene, carbon monoxide, hydrogen cyanide, nitrous gases, etc.). Thermoplastic materials melt during burning, then flow and, depending on their location (for instance, if mounted in or on a ceiling), produce drops which fall into the burning area and might ignite the combustible materials underneath.
The improvement of burning properties represents a complex problem and a “key issue” of plastics chemistry. Fire-retardant agents inhibit combustibility: ignition becomes slower, the rate of combustion falls, and flame propagation slows down. At the same time, the quantity and optical density of the smoke will be higher and the gas mixture produced will be more toxic.
With regard to physical state, dusts belong to the solid materials, but their physical and chemical properties differ from those of the same materials in compact form. It is known that industrial accidents and catastrophes are caused by dust explosions. Materials that are non-combustible in their usual form, such as metals, may initiate an explosion in the form of dust mixed with air when affected by an ignition source, even one of low energy. The hazard of an explosion also exists with dusts of combustible materials.
Dust can be an explosion hazard not only when floating in the air, but also when settled. In layers of dust, heat may accumulate, and slow burning may develop inside as a result of the increased ability of the particles to react and their lower thermal conductivity. The dust may then be stirred up, and the possibility of a dust explosion grows.
Floating particles in fine distribution present a more severe hazard. Similar to the explosion properties of combustible gases and vapours, dusts also have a special range of air-dust concentration in which an explosion may occur. The lower and upper limit values of explosion concentration and the width of concentration range depend on the size and distribution of particles. If the dust concentration exceeds the highest concentration leading to an explosion, a portion of the dust is not destroyed by fire and absorbs heat, and as a consequence the explosion pressure developed remains below the maximum. The moisture content of air also influences the occurrence of an explosion. At higher humidity, the ignition temperature of the cloud of dust will increase in proportion with the heat quantity necessary for the evaporation of humidity. If an inert foreign dust is mixed in a cloud of dust, the explosivity of the dust-air mixture will be reduced. The effect will be the same if inert gases are mixed in the mixture of air and dust, because the oxygen concentration necessary for burning will be lower.
Experience has shown that all ignition sources, even those of minimum ignition energy, are capable of igniting dust clouds (open flames, electric arcs, mechanical or electrostatic sparks, hot surfaces, etc.). According to test results obtained in the laboratory, the energy demand for ignition of dust clouds is 20 to 40 times higher than in the case of mixtures of combustible vapour and air.
The factors that influence the explosion hazard for settled dusts are the physical and thermal engineering properties of the dust layer, the glowing temperature of the dust and the ignition properties of the decomposition products released by the dust layer.
History tells us that fires were useful for heating and cooking but caused major damage in many cities. Many houses, major buildings and sometimes whole cities were destroyed by fire.
One of the first fire prevention measures was a requirement to extinguish all fires before nightfall. For example, in 872 in Oxford, England, authorities ordered a curfew bell to be rung at sunset to remind citizens to extinguish all indoor fires for the night (Bugbee 1978). Indeed, the word curfew is derived from the French couvre feu which literally means “cover fire”.
The cause of fires is often a result of human action bringing fuel and an ignition source together (e.g., waste paper stored next to heating equipment or volatile flammable liquids being used near open flames).
Fires require fuel, an ignition source and some mechanism to bring the fuel and ignition source together in the presence of air or some other oxidizer. If strategies can be developed to reduce fuel loads, eliminate ignition sources or prevent the fuel/ignition interaction, then fire loss and human death and injury can be reduced.
In recent years, there has been increasing emphasis on fire prevention as one of the most cost-effective measures in dealing with the fire problem. It is often easier (and cheaper) to prevent fires starting than to control or extinguish them once they have started.
This is illustrated in the Fire Safety Concepts Tree (NFPA 1991; 1995a) developed by the NFPA in the United States. This systematic approach to fire safety problems shows that objectives, such as reducing fire deaths in the workplace, can be achieved by preventing fire ignition or managing the impact of fire.
Fire prevention inevitably means changing human behaviour. This requires fire safety education, supported by management, using the latest training manuals, standards and other educational materials. In many countries such strategies are reinforced by law, requiring companies to meet legislated fire prevention objectives as part of their occupational health and safety commitment to their workers.
Fire safety education will be discussed in the next section. However, there is now clear evidence in commerce and industry of the important role of fire prevention. Great use is being made internationally of the following sources: Lees, Loss Prevention in the Process Industries, Volumes 1 and 2 (1980); NFPA 1, Fire Prevention Code (1992); The Management of Health and Safety at Work Regulations (ECD 1992); and the Fire Protection Handbook of the NFPA (Cote 1991). These are supplemented by many regulations, standards and training materials developed by national governments, businesses and insurance companies to minimize losses of life and property.
For a fire safety education programme to be effective, there must be a major corporate policy commitment to safety and the development of an effective plan that has the following steps: (a) planning phase: establishment of goals and objectives; (b) design and implementation phase; and (c) programme evaluation phase: monitoring of effectiveness.
Gratton (1991), in an important article on fire safety education, defined the differences between goals, objectives and implementation practices or strategies. Goals are general statements of intent, which in the workplace might be “to reduce the number of fires and thus reduce death and injury among workers, and the financial impact on companies”.
The human and financial components of the overall goal are not incompatible. Modern risk management practice has demonstrated that improvements in safety for workers through effective loss control practices can be financially rewarding to the company and have a community benefit.
These goals need to be translated into specific fire safety objectives for particular companies and their workforce. These objectives, which must be measurable, usually include statements such as:
· reduce industrial accidents and resulting fires
· reduce fire deaths and injuries
· reduce company property damage.
For many companies, there may be additional objectives such as reduction in business interruption costs or minimization of legal liability exposure.
The tendency among some companies is to assume that compliance with local building codes and standards is sufficient to ensure that their fire safety objectives are met. However, such codes tend to concentrate on life safety, assuming fires will occur.
Modern fire safety management understands that absolute safety is not a realistic goal but sets measurable performance objectives to:
· minimize fire incidents through effective fire prevention
· provide effective means of limiting the size and consequence of fire incidents through effective emergency equipment and procedures
· use insurance to safeguard against large, unforeseen fires, particularly those arising from natural hazards such as earthquakes and bushfires.
The design and implementation of fire safety education programmes for fire prevention are critically dependent upon development of well-planned strategies and effective management and motivation of people. There must be strong and absolute corporate support for full implementation of a fire safety programme for it to be successful.
The range of strategies has been identified by Koffel (1993) and in NFPA’s Industrial Fire Hazards Handbook (Linville 1990). They include:
· promoting the company policy and strategies on fire safety to all company employees
· identifying all potential fire scenarios and implementing appropriate risk reduction actions
· monitoring all local codes and standards that define the standard of care in a particular industry
· operating a loss administration programme to measure all losses for comparison with performance objectives
· training of all employees in proper fire prevention and emergency response techniques.
Some international examples of implementation strategies include:
· courses operated by the Fire Protection Association (FPA) in the United Kingdom that lead to the European Diploma in Fire Prevention (Welch 1993)
· the creation of SweRisk, a subsidiary company of the Swedish Fire Protection Association, to assist companies in undertaking risk assessments and in developing fire prevention programmes (Jernberg 1993)
· massive citizen and worker involvement in fire prevention in Japan to standards developed by the Japan Fire Defence Agency (Hunter 1991)
· fire safety training in the United States through the use of the Firesafety Educator’s Handbook (NFPA 1983) and the Public Fire Education Manual (Osterhoust 1990).
It is critically important to measure the effectiveness of fire safety education programmes. This measurement provides the motivation for further programme financing, development and adjustment where necessary.
The best example of monitoring and success of fire safety education is probably in the United States. The Learn Not to Burn® programme, aimed at educating the young people in America on the dangers of fire, has been coordinated by the Public Education Division of the NFPA. Monitoring and analysis in 1990 identified a total of 194 lives saved as a result of proper life safety actions learned in fire safety education programmes. Some 30% of these lives saved can be directly attributed to the Learn Not to Burn® programme.
The introduction of residential smoke detectors and fire safety education programmes in the United States have also been suggested as the primary reasons for the reduction in home fire deaths in that country, from 6,015 in 1978 to 4,050 in 1990 (NFPA 1991).
In the industrial field, Lees (1980) is an international authority. He indicated that in many industries today, the potential for very large loss of life, serious injuries or property damage is far greater than in the past. Large fires, explosions and toxic releases can result, particularly in the petrochemical and nuclear industries.
Fire prevention is therefore the key to minimizing fire ignition. Modern industrial plants can achieve good fire safety records through well-managed programmes of:
· housekeeping and safety inspections
· employee fire prevention training
· equipment maintenance and repair
· security and arson prevention (Blye and Bacon 1991).
A useful guide on the importance of housekeeping for fire prevention in commercial and industrial premises is given by Higgins (1991) in the NFPA’s Fire Protection Handbook.
The value of good housekeeping in minimizing combustible loads and in preventing exposure of ignition sources is recognized in modern computer tools used for assessing fire risks in industrial premises. The FREM (Fire Risk Evaluation Method) software in Australia identifies housekeeping as a key fire safety factor (Keith 1994).
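Such tools typically combine scores for individual fire safety factors into an overall risk index. The sketch below is purely illustrative: the factor names, weights and scoring scale are hypothetical assumptions and do not reproduce FREM’s actual method.

```python
# Illustrative weighted-factor fire risk index (hypothetical factors and
# weights, not the actual FREM algorithm).

FACTOR_WEIGHTS = {
    "housekeeping": 0.25,         # combustible loads, exposure of ignition sources
    "ignition_sources": 0.20,
    "detection_and_alarm": 0.20,
    "suppression_systems": 0.20,
    "compartmentation": 0.15,
}

def fire_risk_index(scores: dict) -> float:
    """Combine factor scores (0 = very poor, 5 = excellent) into a 0-5 index.
    A lower index indicates a higher residual fire risk."""
    return sum(FACTOR_WEIGHTS[name] * scores[name] for name in FACTOR_WEIGHTS)

# Example: good housekeeping raises the overall index noticeably.
site = {"housekeeping": 4, "ignition_sources": 3, "detection_and_alarm": 4,
        "suppression_systems": 2, "compartmentation": 3}
print(f"Fire risk index: {fire_risk_index(site):.2f} out of 5")
```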
Heat utilization equipment in commerce and industry includes ovens, furnaces, kilns, dehydrators, dryers and quench tanks.
In the NFPA’s Industrial Fire Hazards Handbook, Simmons (1990) identified the fire problems with heating equipment to be:
1. the possibility of igniting combustible materials stored nearby
2. fuel hazards resulting from unburned fuel or incomplete combustion
3. overheating leading to equipment failure
4. ignition of combustible solvents, solid materials or other products being processed.
These fire problems can be overcome through a combination of good housekeeping, proper controls and interlocks, operator training and testing, and cleaning and maintenance in an effective fire prevention programme.
Detailed recommendations for the various categories of heat utilization equipment are set out in the NFPA’s Fire Protection Handbook (Cote 1991). These are summarized below.
Fires and explosions in ovens and furnaces typically result from the fuel used, from volatile substances provided by the material in the oven or by a combination of both. Many of these ovens or furnaces operate at 500 to 1,000 °C, which is well above the ignition temperature of most materials.
Ovens and furnaces require a range of controls and interlocks to ensure that unburned fuel gases or products of incomplete combustion cannot accumulate and be ignited. Typically, these hazards develop while firing up or during shut-down operations. Therefore, special training is required to ensure that operators always follow safety procedures.
Non-combustible building construction, separation of other equipment and combustible materials and some form of automatic fire suppression are usually essential elements of a fire safety system to prevent spread should a fire start.
Kilns are used to dry timber (Lataille 1990) and to process or “fire” clay products (Hrbacek 1984).
Again, this high-temperature equipment represents a hazard to its surroundings. Proper separation design and good housekeeping are essential to prevent fire.
Lumber kilns used for drying timber are additionally hazardous because the timber itself is a high fire load and is often heated close to its ignition temperature. It is essential that kilns be cleaned regularly to prevent a build-up of small pieces of wood and sawdust so that this does not come in contact with the heating equipment. Kilns made of fire-resistive construction material, fitted with automatic sprinklers and provided with high-quality ventilation/air circulation systems are preferred.
Dehydrators and dryers are used to reduce the moisture content of agricultural products such as milk, eggs, grains, seeds and hay. The dryers may be direct-fired, in which case the products of combustion contact the material being dried, or they may be indirect-fired. In each case, controls are required to shut off the heat supply in the event of excessive temperature, fire in the dryer, exhaust system or conveyor system, or failure of air circulation fans. Again, adequate cleaning to prevent build-up of products that could ignite is required.
The general principles of fire safety of quench tanks are identified by Ostrowski (1991) and Watts (1990).
The process of quenching, or controlled cooling, occurs when a heated metal item is immersed in a tank of quenching oil. The process is undertaken to harden or temper the material through metallurgical change.
Most quenching oils are mineral oils which are combustible. They must be chosen carefully for each application to ensure that the ignition temperature of the oil is above the operating temperature of the tank as the hot metal pieces are immersed.
It is critical that the oil does not overflow the sides of the tank. Therefore, liquid level controls and appropriate drains are essential.
Partial immersion of hot items is the most common cause of quench tank fires. This can be prevented by appropriate material transfer or conveyor arrangements.
Likewise, appropriate controls must be provided to avoid excessive oil temperatures and entry of water into the tank that can result in boil-over and major fire in and around the tank.
Specific automatic fire extinguishing systems such as carbon dioxide or dry chemical are often used to protect the tank surface. Overhead, automatic sprinkler protection of the building is desirable. In some cases, special protection of operators who need to work close to the tank is also required. Often, water spray systems are provided for exposure protection for workers.
Above all, proper training of workers in emergency response, including use of portable fire extinguishers, is essential.
Operations to chemically change the nature of materials have often been the source of major catastrophes, causing severe plant damage and death and injury to workers and surrounding communities. Risks to life and property from incidents in chemical process plants may come from fires, explosions or toxic chemical releases. The energy of destruction often comes from uncontrolled chemical reaction of process materials, combustion of fuels leading to pressure waves or high levels of radiation and flying missiles that can cause damage at large distances.
The first stage of design is to understand the chemical processes involved and their potential for energy release. Lees (1980) in his Loss Prevention in the Process Industries sets out in detail the steps required to be undertaken, which include:
· proper process design
· study of failure mechanisms and reliability
· hazard identification and safety audits
· hazard assessment: cause/consequence analysis.
The assessment of the degrees of hazard must examine:
· potential emission and dispersal of chemicals, particularly toxic and contaminating substances
· effects of fire radiation and dispersal of combustion products
· results of explosions, particularly pressure shock waves that can destroy other plants and buildings.
More details of process hazards and their control are given in Plant guidelines for technical management of chemical process safety (AIChE 1993); Sax’s Dangerous Properties of Industrial Materials (Lewis 1979); and the NFPA’s Industrial Fire Hazards Handbook (Linville 1990).
Once the hazards and consequences of fire, explosion and toxic releases have been identified, siting of chemical process plants can be undertaken.
Again, Lees (1980) and Bradford (1991) provided guidelines on plant siting. Plants must be separated from surrounding communities sufficiently to ensure that those communities cannot be affected by an industrial accident. The technique of quantitative risk assessment (QRA) to determine separation distances is widely used and legislated for in the design of chemical process plants.
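As a minimal sketch of the QRA idea, the individual risk at a given distance can be estimated as the product of an event frequency and the probability of a fatal outcome at that distance, and the separation distance chosen where the risk falls below a tolerability criterion. The event frequency, consequence model and criterion below are entirely assumed values for illustration.

```python
import math

# Hypothetical quantitative risk assessment (QRA) sketch: the frequency,
# consequence model and risk criterion below are illustrative assumptions.

EVENT_FREQUENCY = 1e-4        # major loss-of-containment events per year (assumed)
HAZARD_RANGE_M = 150.0        # distance scale of the consequence model (assumed)
RISK_CRITERION = 1e-6         # tolerable individual risk of fatality per year (assumed)

def fatality_probability(distance_m: float) -> float:
    """Assumed exponential decay of fatality probability with distance."""
    return math.exp(-distance_m / HAZARD_RANGE_M)

def individual_risk(distance_m: float) -> float:
    """Individual risk per year at a given distance from the plant."""
    return EVENT_FREQUENCY * fatality_probability(distance_m)

# Find the smallest separation distance meeting the criterion.
distance = 0.0
while individual_risk(distance) > RISK_CRITERION:
    distance += 10.0
print(f"Required separation distance: about {distance:.0f} m")
```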
The disaster in Bhopal, India, in 1984 demonstrated the consequences of locating a chemical plant too close to a community: over 1,000 people were killed by toxic chemicals in an industrial accident.
Provision of separating space around chemical plants also allows ready access for fire-fighting from all sides, regardless of wind direction.
Chemical plants must provide exposure protection in the form of explosion-resistant control rooms, worker refuges and fire-fighting equipment to ensure that workers are protected and that effective fire-fighting can be undertaken after an incident.
Spills of flammable or hazardous materials should be kept small by appropriate process design, fail-safe valves and appropriate detection/control equipment. However, if large spills occur, they should be confined to areas surrounded by walls, sometimes of earth, where they can burn harmlessly if ignited.
Fires in drainage systems are common, and special attention must be paid to drains and sewerage systems.
Equipment that transfers heat from a hot fluid to a cooler one can be a source of fire in chemical plants. Excessive localized temperatures can cause decomposition and burn out of many materials. This may sometimes cause rupture of the heat-transfer equipment and transfer of one fluid into another, causing an unwanted violent reaction.
High levels of inspection and maintenance, including cleaning of heat transfer equipment, are essential to safe operation.
Reactors are the vessels in which the desired chemical processes are undertaken. They can be of a continuous or batch type but require special design attention. Vessels must be designed to withstand pressures that might result from explosions or uncontrolled reactions or alternatively must be provided with appropriate pressure-relief devices and sometimes emergency venting.
Safety measures for chemical reactors include:
· appropriate instrumentation and controls to detect potential incidents, including redundant circuitry
· high quality cleaning, inspection and maintenance of the equipment and the safety controls
· adequate training of operators in control and emergency response
· appropriate fire suppression equipment and fire-fighting personnel.
The Factory Mutual Engineering Corporation’s (FM) Loss Prevention Data Sheet (1977) shows that nearly 10% of losses in industrial properties are due to incidents involving cutting and welding of materials, generally metals. It is clear that the high temperatures required to melt the metals during these operations can start fires, as can the sparks generated in many of these processes.
The FM Data Sheet (1977) indicates that the materials most frequently involved in fires due to welding and cutting are flammable liquids, oily deposits, combustible dusts and wood. The types of industrial areas where accidents are most likely are storage areas, building construction sites, facilities undergoing repair or alteration and waste disposal systems.
Sparks from cutting and welding can often travel up to 10 m and lodge in combustible materials where smouldering and later flaming fires can occur.
Arc welding and arc cutting are examples of processes involving electricity to provide the arc that is the heat source for melting and joining metals. Flashes of sparks are common, and protection of workers from electrocution, spark flashes and intense arc radiation is required.
Oxy-fuel gas welding and cutting use the heat of combustion of the fuel gas and oxygen to generate flames of high temperature that melt the metals being joined or cut. Manz (1991) indicated that acetylene is the most widely used fuel gas because of its high flame temperature of about 3,000 °C.
The presence of fuel and oxygen at high pressure increases the hazard, as does leakage of these gases from their storage cylinders. It is important to remember that many materials that do not burn, or only burn slowly in air, burn violently in pure oxygen.
Good safety practices are identified by Manz (1991) in the NFPA Fire Protection Handbook.
These safeguards and precautions include:
· proper design, installation and maintenance of welding and cutting equipment, particularly storage and leak testing of fuel and oxygen cylinders
· proper preparation of work areas to remove all chance of accidental ignition of surrounding combustibles
· strict management control over all welding and cutting processes
· training of all operators in safe practices
· proper fire-resistant clothing and eye protection for operators and nearby workers
· adequate ventilation to prevent exposure of operators or nearby workers to noxious gases and fumes.
Special precautions are required when welding or cutting tanks or other vessels that have held flammable materials. A useful guide is the American Welding Society’s Recommended Safe Practices for the Preparation for Welding and Cutting of Containers that have held Hazardous Substances (1988).
For building works and alterations, a UK publication, the Loss Prevention Council’s Fire Prevention on Construction Sites (1992) is useful. It contains a sample hot-work permit to control cutting and welding operations. This would be useful for management in any plant or industrial site. A similar sample permit is provided in the FM Data Sheet on cutting and welding (1977).
Lightning is a frequent cause of fires and deaths of people in many countries in the world. For example, each year some 240 US citizens die as a result of lightning.
Lightning is a form of electrical discharge between charged clouds and the earth. The FM Data Sheet (1984) on lightning indicates that lightning strikes may range from 2,000 to 200,000 A as a result of a potential difference of 5 to 50 million V between clouds and the earth.
The frequency of lightning varies between countries and areas depending on the number of thunderstorm-days per year for the locality. The damage that lightning can cause depends very much on the ground condition, with more damage occurring in areas of high earth resistivity.
The NFPA 780 Standard for the Installation of Lightning Protection Systems (1995b) sets out the design requirements for protection of buildings. While the exact theory of lightning discharges is still being investigated, the basic principle of protection is to provide a means by which a lightning discharge may enter or leave the earth without damaging the building being protected.
Lightning systems, therefore, have two functions:
· to intercept the lightning discharge before it strikes the building
· to provide a harmless discharge path to earth.
This requires buildings to be fitted with:
· lightning rods or masts
· down conductors
· good ground connections, typically 10 ohms or less.
More details for the design of lightning protection for buildings are provided by Davis (1991) in the NFPA Fire Protection Handbook (Cote 1991) and in the British Standards Institute’s Code of Practice (1992).
Overhead transmission lines, transformers, outdoor substations and other electrical installations can be damaged by direct lightning strikes. Electrical transmission equipment can also pick up induced voltage and current surges that can enter buildings. Fires, damage to equipment and serious interruption to operations may result. Surge arresters are required to divert these voltage peaks to ground through effective earthing.
The increased use of sensitive computer equipment in commerce and industry has made operations more sensitive to transient over-voltages induced in power and communication cables in many buildings. Appropriate transient protection is required and special guidance is provided in the British Standards Institute BS 6651:1992, The Protection of Structures Against Lightning.
Proper maintenance of lightning systems is essential for effective protection. Special attention has to be paid to ground connections. If they are not effective, lightning protection systems will be ineffective.
Fire safety engineering work should begin early in the design phase because the fire safety requirements influence the layout and design of the building considerably. In this way, the designer can incorporate fire safety features into the building much better and more economically. The overall approach includes consideration of both interior building functions and layout, as well as exterior site planning. Prescriptive code requirements are more and more replaced by functionally based requirements, which means there is an increased demand for experts in this field. From the beginning of the construction project, the building designer therefore should contact fire experts to elucidate the following actions:
· to describe the fire problem specific to the building
· to describe different alternatives to obtain the required fire safety level
· to analyse system choice regarding technical solutions and economy
· to establish the basis for technically optimized system choices.
The architect must utilize a given site in designing the building and adapt the functional and engineering considerations to the particular site conditions that are present. In a similar manner, the architect should consider site features in arriving at decisions on fire protection. A particular set of site characteristics may significantly influence the type of active and passive protection suggested by the fire consultant. Design features should consider the local fire-fighting resources that are available and the time to reach the building. The fire service cannot and should not be expected to provide complete protection for building occupants and property; it must be assisted by both active and passive building fire defences, to provide reasonable safety from the effects of fire. Briefly, the operations may be broadly grouped as rescue, fire control and property conservation. The first priority of any fire-fighting operation is to ensure that all occupants are out of the building before critical conditions occur.
A well-established means of codifying fire protection and fire safety requirements for buildings is to classify them by types of construction, based upon the materials used for the structural elements and the degree of fire resistance afforded by each element. Classification can be based on furnace tests in accordance with ISO 834 (fire exposure is characterized by the standard temperature-time curve), combination of test and calculation or by calculation. These procedures will identify the standard fire resistance (the ability to fulfil required functions during 30, 60, 90 minutes, etc.) of a structural load-bearing and/or separating member. Classification (especially when based on tests) is a simplified and conservative method and is more and more replaced by functionally based calculation methods taking into account the effect of fully developed natural fires. However, fire tests will always be required, but they can be designed in a more optimal way and be combined with computer simulations. In that procedure, the number of tests can be reduced considerably. Usually, in the fire test procedures, load-bearing structural elements are loaded to 100% of the design load, but in real life the load utilization factor is most often less than that. Acceptance criteria are specific for the construction or element tested. Standard fire resistance is the measured time the member can withstand the fire without failure.
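For reference, the standard temperature-time curve of ISO 834 used in such furnace tests can be evaluated as in the short sketch below (time in minutes, temperature in °C):

```python
import math

def iso834_temperature(t_minutes: float, ambient_c: float = 20.0) -> float:
    """Standard temperature-time curve of ISO 834: T = T0 + 345 * log10(8t + 1)."""
    return ambient_c + 345.0 * math.log10(8.0 * t_minutes + 1.0)

# Furnace temperatures at common fire-resistance rating times.
for minutes in (30, 60, 90, 120):
    print(f"{minutes:>3} min: {iso834_temperature(minutes):.0f} °C")
```

At 30 minutes the curve reaches roughly 842 °C and at 60 minutes roughly 945 °C, which illustrates why structural members are rated by the period of time they can withstand this exposure.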
Optimum fire engineering design, balanced against anticipated fire severity, is the objective of structural and fire protection requirements in modern performance-based codes. These have opened the way for fire engineering design by calculation with prediction of the temperature and structural effect due to a complete fire process (heating and subsequent cooling is considered) in a compartment. Calculations based on natural fires mean that the structural elements (important for the stability of the building) and the whole structure are not allowed to collapse during the entire fire process, including cool down.
Comprehensive research has been performed during the past 30 years. Various computer models have been developed. These models utilize basic research on mechanical and thermal properties of materials at elevated temperatures. Some computer models are validated against a vast number of experimental data, and a good prediction of structural behaviour in fire is obtained.
A fire compartment is a space within a building, extending over one or several floors, which is enclosed by separating members such that fire spread beyond the compartment is prevented during the relevant fire exposure. Compartmentation is important in preventing the fire from spreading into excessively large spaces or into the whole building. People and property outside the fire compartment can be protected either because the fire is extinguished or burns out by itself, or because the separating members delay the spread of fire and smoke until the occupants have reached a place of safety.
The fire resistance required of a compartment depends upon its intended purpose and on the expected fire. The separating members enclosing the compartment must either resist the maximum expected fire or contain the fire until the occupants are evacuated. The load-bearing elements in the compartment must always resist the complete fire process or be classified to a certain resistance, measured in terms of periods of time, that is equal to or longer than the requirement for the separating members.
Maintaining structural integrity during a fire requires the avoidance of structural collapse and the ability of the separating members to prevent ignition and flame spread into adjacent spaces. There are different approaches to designing for fire resistance: classification based on the standard fire-resistance test of ISO 834, a combination of test and calculation, calculation alone, or the performance-based procedure of computer prediction based on real fire exposure.
Interior finish is the material that forms the exposed interior surface of walls, ceilings and floors. There are many types of interior finish materials, such as plaster, gypsum, wood and plastics. They serve several functions: some interior materials provide acoustic or thermal insulation, and some protect against wear and abrasion.
Interior finish is related to fire in four different ways. It can affect the rate of fire build-up to flashover conditions, contribute to fire extension by flame spread, increase the heat release by adding fuel and produce smoke and toxic gases. Materials that exhibit high rates of flame spread, contribute fuel to a fire or produce hazardous quantities of smoke and toxic gases would be undesirable.
In building fires, smoke often moves to locations remote from the fire space. Stairwells and elevator shafts can become smoke-logged, thereby blocking evacuation and inhibiting fire-fighting. Today, smoke is recognized as the major killer in fire situations (see figure 41.4).
The driving forces of smoke movement include naturally occurring stack effect, buoyancy of combustion gases, the wind effect, fan-powered ventilation systems and the elevator piston effect.
When it is cold outside, there is an upward movement of air within building shafts. Air in the building has a buoyant force because it is warmer and therefore less dense than outside air. The buoyant force causes air to rise within building shafts. This phenomenon is known as the stack effect. The pressure difference from the shaft to the outside, which causes smoke movement, is illustrated below:
ΔP_so = (g · P_atm / R) · (1/T_o - 1/T_s) · h

where:
ΔP_so = the pressure difference from the shaft to the outside
g = acceleration of gravity
P_atm = absolute atmospheric pressure
R = gas constant of air
T_o = absolute temperature of outside air
T_s = absolute temperature of air inside the shaft
h = distance above the neutral plane
High-temperature smoke from a fire has a buoyancy force due to its reduced density. The equation for buoyancy of combustion gases is similar to the equation for the stack effect.
In addition to buoyancy, the energy released by a fire can cause smoke movement due to expansion. Air will flow into the fire compartment, and hot smoke will be distributed in the compartment. Neglecting the added mass of the fuel, the ratio of volumetric flows can simply be expressed as a ratio of absolute temperature.
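A minimal numerical sketch of these two relationships is given below; the outside and shaft temperatures, the smoke temperature and the height above the neutral plane are assumed values chosen only for illustration.

```python
# Stack effect and fire-gas expansion: a numerical sketch with assumed values.

G = 9.81            # acceleration of gravity, m/s^2
P_ATM = 101_325.0   # absolute atmospheric pressure, Pa
R_AIR = 287.0       # gas constant of air, J/(kg*K)

def stack_pressure_difference(t_out_k: float, t_shaft_k: float, height_m: float) -> float:
    """Pressure difference (Pa) from shaft to outside at height_m above the neutral plane."""
    return (G * P_ATM / R_AIR) * (1.0 / t_out_k - 1.0 / t_shaft_k) * height_m

# Assumed winter conditions: -10 °C outside, 20 °C in the shaft, 30 m above the neutral plane.
dp = stack_pressure_difference(263.15, 293.15, 30.0)
print(f"Stack-effect pressure difference: {dp:.0f} Pa")

# Expansion of fire gases: the ratio of volumetric flows out of and into the
# fire compartment is approximately the ratio of absolute temperatures.
t_smoke_k, t_air_k = 873.15, 293.15   # assumed 600 °C smoke, 20 °C incoming air
print(f"Volumetric expansion ratio: {t_smoke_k / t_air_k:.1f}")
```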
Wind has a pronounced effect on smoke movement. The elevator piston effect should not be neglected. When an elevator car moves in a shaft, transient pressures are produced.
Heating, ventilating and air conditioning (HVAC) systems transport smoke during building fires. When a fire starts in an unoccupied portion of a building, the HVAC system can transport smoke to another occupied space. The HVAC system should be designed so that either the fans are shut down or the system transfers into a special smoke control mode operation.
Smoke movement can be managed by use of one or more of the following mechanisms: compartmentation, dilution, air flow, pressurization or buoyancy.
Egress design should be based upon an evaluation of a building’s total fire protection system (see figure 41.5).
People evacuating from a burning building are influenced by a number of impressions during their escape. The occupants have to make several decisions during the escape in order to make the right choices in each situation. These reactions can differ widely, depending upon the physical and mental capabilities and conditions of building occupants.
The building will also influence the decisions made by the occupants through its escape routes, guidance signs and other installed safety systems. The spread of fire and smoke will have the strongest impact on how the occupants make their decisions. Smoke will limit visibility in the building and create a non-tenable environment for the evacuating persons. Radiation from fire and flames can render large spaces unusable for evacuation, which increases the risk.
In designing means of egress one first needs a familiarity with the reaction of people in fire emergencies. Patterns of movement of people must be understood.
The three stages of evacuation time are notification time, reaction time and time to evacuate. The notification time is related to whether there is a fire alarm system in the building or if the occupant is able to understand the situation or how the building is divided into compartments. The reaction time depends on the occupant’s ability to make decisions, the properties of the fire (such as the amount of heat and smoke) and how the building’s egress system is planned. Finally, the time to evacuate depends on where in the building crowds are formed and how people move in various situations.
In certain buildings with mobile occupants, for example, studies have shown reproducible flow characteristics for persons exiting the building. These predictable flow characteristics have fostered computer simulations and modelling to aid the egress design process.
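As a simple illustration of how such flow characteristics feed into egress calculations, the sketch below estimates a total evacuation time as the sum of notification, reaction and movement times, with the movement time derived from an assumed specific flow through the available exit width. All numerical values are assumptions for illustration only.

```python
# Simplified egress-time estimate: all input values below are assumptions.

def evacuation_time(occupants: int,
                    exit_width_m: float,
                    notification_s: float = 60.0,
                    reaction_s: float = 120.0,
                    specific_flow_pps_per_m: float = 1.3) -> float:
    """Total evacuation time in seconds.

    Movement time is estimated from an assumed specific flow
    (persons per second per metre of effective exit width)."""
    movement_s = occupants / (specific_flow_pps_per_m * exit_width_m)
    return notification_s + reaction_s + movement_s

# Example: 400 occupants and 2.4 m of total effective exit width.
total = evacuation_time(occupants=400, exit_width_m=2.4)
print(f"Estimated evacuation time: {total / 60:.1f} minutes")
```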
The evacuation travel distances are related to the fire hazard of the contents. The higher the hazard, the shorter the travel distance to an exit.
A safe exit from a building requires a safe path of escape from the fire environment. Hence, there must be a number of properly designed means of egress of adequate capacity. There should be at least one alternative means of egress, considering that fire, smoke or the characteristics of the occupants may prevent use of one means of egress. The means of egress must be protected against fire, heat and smoke during the egress time. Thus, building codes must address passive protection for evacuation as well as for fire protection, and a building must be designed so that the critical conditions specified in the codes for evacuation are not exceeded. For example, in the Swedish Building Codes, the smoke layer must not descend below 1.6 + 0.1H metres above the floor (H is the total compartment height), radiation must not exceed 10 kW/m2 for short duration, and the temperature of the breathing air must not exceed 80 °C.
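A minimal sketch of how such code criteria could be checked against simulation output is given below; the smoke-layer, radiation and temperature limits are those quoted above from the Swedish Building Codes, while the sample compartment and fire conditions are assumed values.

```python
# Tenability check against the Swedish Building Code criteria quoted above.
# The sample fire conditions are assumed values for illustration.

def tenable(compartment_height_m: float,
            smoke_layer_height_m: float,
            radiation_kw_m2: float,
            air_temperature_c: float) -> bool:
    """Return True if all evacuation criteria are satisfied."""
    min_clear_height = 1.6 + 0.1 * compartment_height_m   # smoke-layer limit
    return (smoke_layer_height_m >= min_clear_height
            and radiation_kw_m2 <= 10.0                    # short-duration radiation limit
            and air_temperature_c <= 80.0)                 # breathing-air temperature limit

# Example: a 6 m high compartment at one instant of a fire simulation.
print(tenable(compartment_height_m=6.0,
              smoke_layer_height_m=2.5,
              radiation_kw_m2=2.0,
              air_temperature_c=55.0))   # True: conditions remain tenable
```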
An effective evacuation can take place if a fire is discovered early and the occupants are alerted promptly by a detection and alarm system. Proper marking of the means of egress facilitates the evacuation. There is also a need for organized evacuation procedures and regular evacuation drills.
How one reacts during a fire is related to the role assumed, previous experience, education and personality; the perceived threat of the fire situation; the physical characteristics and means of egress available within the structure; and the actions of others who are sharing the experience. Detailed interviews and studies over 30 years have established that instances of non-adaptive, or panic, behaviour are rare events that occur under specific conditions. Most behaviour in fires is determined by information analysis, resulting in cooperative and altruistic actions.
Human behaviour is found to pass through a number of identified stages, with the possibility of various routes from one stage to the next. In summary, the fire is seen as having three general stages:
1. The individual receives initial cues and investigates or misinterprets these initial cues.
2. Once the fire is apparent, the individual will try to obtain further information, contact others or leave.
3. The individual will thereafter deal with the fire, interact with others or escape.
Pre-fire activity is an important factor. If a person is engaged in a well-known activity, for example eating a meal in a restaurant, the implications for subsequent behaviour are considerable.
Cue reception may be a function of pre-fire activity. There is a tendency for gender differences, with females more likely to be recipients of noises and odours, though the effect is only slight. There are role differences in initial responses to the cue. In domestic fires, if the female receives the cue and investigates, the male, when told, is likely to “have a look” and delay further actions. In larger establishments, the cue may be an alarm warning. Information may come from others and has been found to be inadequate for effective behaviour.
Individuals may or may not have realized that there is a fire. An understanding of their behaviour must take account of whether they have defined their situation correctly.
When the fire has been defined, the “prepare” stage occurs. The particular type of occupancy is likely to have a great influence on exactly how this stage develops. The “prepare” stage includes in chronological order “instruct”, “explore” and “withdraw”.
The “act” stage, which is the final stage, depends upon role, occupancy, and earlier behaviour and experience. It may be possible for early evacuation or effective fire-fighting to occur.
Building transportation systems must be considered during the design stage and should be integrated with the whole building’s fire protection system. The hazards associated with these systems must be included in any pre-fire planning and fire protection survey.
Building transportation systems, such as elevators and escalators, make high-rise buildings feasible. Elevator shafts can contribute to the spread of smoke and fire. On the other hand, an elevator is a necessary tool for fire-fighting operations in high-rise buildings.
Transportation systems may contribute to dangerous and complicated fire safety problems, since an enclosed elevator shaft acts as a chimney or flue owing to the stack effect of hot smoke and gases from fire. This generally results in the movement of smoke and combustion products from lower to upper levels of the building.
High-rise buildings present new and different problems to fire-suppression forces, including the use of elevators during emergencies. Elevators are unsafe in a fire for several reasons:
1. Persons may push a corridor button and have to wait for an elevator that may never respond, losing valuable escape time.
2. Elevators do not prioritize car and corridor calls, and one of the calls may be at the fire floor.
3. Elevators cannot start until the lift and shaft doors are closed, and panic could lead to overcrowding of an elevator and the blockage of the doors, which would thus prevent closing.
4. The power can fail during a fire at any time, thus leading to entrapment. (See figure 41.6)
Proper marking of the means of egress facilitates the evacuation, but it does not ensure life safety during fire. Exit drills are necessary to make an orderly escape. They are especially required in schools, board and care facilities and industries with high hazard. Employee drills are required, for example, in hotel and large business occupancies. Exit drills should be conducted to avoid confusion and ensure the evacuation of all occupants.
All employees should be assigned to check for availability, to count occupants when they are outside the fire area, to search for stragglers and to control re-entry. They should also recognize the evacuation signal and know the exit route they are to follow. Primary and alternative routes should be established, and all employees should be trained to use either route. After each exit drill, a meeting of responsible managers should be held to evaluate the success of the drill and to solve any kind of problem that could have occurred.
As the primary importance of any fire protection measure is to provide an acceptable degree of life safety to inhabitants of a structure, in most countries legal requirements applying to fire protection are based on life safety concerns. Property protection features are intended to limit physical damage. In many cases these objectives are complementary. Where concern exists with the loss of property, its function or contents, an owner may choose to implement measures beyond the required minimum necessary to address life safety concerns.
A fire detection and alarm system provides a means to detect fire automatically and to warn building occupants of the threat of fire. It is the audible or visual alarm provided by a fire detection system that is the signal to begin the evacuation of the occupants from the premises. This is especially important in large or multi-storey buildings where occupants would be unaware that a fire was underway within the structure and where it would be unlikely or impractical for warning to be provided by another inhabitant.
A fire detection and alarm system may include all or some of the following:
1. a system control unit
2. a primary or main electrical power supply
3. a secondary (stand-by) power supply, usually supplied from batteries or an emergency generator
4. alarm-initiating devices such as automatic fire detectors, manual pull stations and/or sprinkler system flow devices, connected to “initiating circuits” of the system control unit
5. alarm-indicating devices, such as bells or lights, connected to “indicating circuits” of the system control unit
6. ancillary controls such as ventilation shut-down functions, connected to output circuits of the system control unit
7. remote alarm indication to an external response location, such as the fire department
8. control circuits to activate a fire protection system or smoke control system.
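A minimal sketch of how these components and their circuit relationships might be represented in software follows; the class and field names are illustrative assumptions, not drawn from any particular product or standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of a fire detection and alarm system; names are hypothetical.

@dataclass
class InitiatingDevice:          # e.g., smoke detector, manual pull station, flow switch
    location: str
    kind: str

@dataclass
class IndicatingAppliance:       # e.g., bell, horn, strobe
    location: str
    kind: str

@dataclass
class ControlUnit:
    initiating_circuit: List[InitiatingDevice] = field(default_factory=list)
    indicating_circuit: List[IndicatingAppliance] = field(default_factory=list)
    ancillary_outputs: List[str] = field(default_factory=list)   # e.g., "ventilation shut-down"
    remote_monitoring: str = ""                                  # e.g., fire department link

    def alarm(self, device: InitiatingDevice) -> None:
        """On activation of any initiating device, operate all indicating
        appliances, ancillary outputs and the remote connection."""
        print(f"ALARM from {device.kind} at {device.location}")
        for appliance in self.indicating_circuit:
            print(f"  activate {appliance.kind} at {appliance.location}")
        for output in self.ancillary_outputs:
            print(f"  operate ancillary control: {output}")
        if self.remote_monitoring:
            print(f"  notify {self.remote_monitoring}")

panel = ControlUnit(
    initiating_circuit=[InitiatingDevice("storeroom", "smoke detector")],
    indicating_circuit=[IndicatingAppliance("main corridor", "bell")],
    ancillary_outputs=["ventilation shut-down"],
    remote_monitoring="fire department",
)
panel.alarm(panel.initiating_circuit[0])
```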
To reduce the threat of smoke from entering exit paths during evacuation from a structure, smoke control systems can be used. Generally, mechanical ventilation systems are employed to supply fresh air to the exit path. This method is most often used to pressurize stairways or atrium buildings. This is a feature intended to enhance life safety.
Portable fire extinguishers and water hose reels are often provided for use by building occupants to fight small fires (see figure 41.7). Building occupants should not be encouraged to use a portable fire extinguisher or hose reel unless they have been trained in their use. In all cases, operators should be very cautious to avoid placing themselves in a position where safe egress is blocked. For any fire, no matter how small, the first action should always be to notify other building occupants of the threat of fire and summon assistance from the professional fire service.
Water sprinkler systems consist of a water supply, distribution valves and piping connected to automatic sprinkler heads (see figure 41.8). While current sprinkler systems are primarily intended to control the spread of fire, many systems have accomplished complete extinguishment.
A common misconception is that all automatic sprinkler heads open in the event of a fire. In fact, each sprinkler head is designed to open only when sufficient heat is present to indicate a fire. Water then flows only from the sprinkler head(s) that have opened as the result of fire in their immediate vicinity. This design feature provides efficient use of water for fire-fighting and limits water damage.
Water for an automatic sprinkler system must be available in sufficient quantity and at sufficient flow rate and pressure at all times to ensure reliable operation in the event of fire. Where a municipal water supply cannot meet this requirement, a reservoir or pump arrangement must be installed to provide a secure water supply.
Control valves should be maintained in the open position at all times. Often, supervision of the control valves can be accomplished by the automatic fire alarm system by provision of valve tamper switches that will initiate a trouble or supervisory signal at the fire alarm control panel to indicate a closed valve. If this type of monitoring cannot be provided, the valves should be locked in the open position.
Water flows through a piping network, ordinarily suspended from the ceiling, with the sprinkler heads suspended at intervals along the pipes. Piping used in sprinkler systems should be of a type that can withstand a working pressure of not less than 1,200 kPa. For exposed piping systems, fittings should be of the screwed, flanged, mechanical joint or brazed type.
A sprinkler head consists of an orifice, normally held closed by a temperature-sensitive releasing element, and a spray deflector. The water discharge pattern and spacing requirements for individual sprinkler heads are used by sprinkler designers to ensure complete coverage of the protected risk.
Special extinguishing systems are used in cases where water sprinklers would not provide adequate protection or where the risk of damage from water would be unacceptable. In many cases where water damage is of concern, special extinguishing systems may be used in conjunction with water sprinkler systems, with the special extinguishing system designed to react at an early stage of fire development.
Water spray systems increase the effectiveness of water by producing smaller water droplets, and thus a greater surface area of water is exposed to the fire, with a relative increase in heat absorption capability. This type of system is often chosen as a means of keeping large pressure vessels, such as butane spheres, cool when there is a risk of an exposure fire originating in an adjacent area. The system is similar to a sprinkler system; however, all heads are open, and a separate detection system or manual action is used to open control valves. This allows water to flow through the piping network to all spray devices that serve as outlets from the piping system.
In a foam system, a liquid concentrate is injected into the water supply before the control valve. Foam concentrate and air are mixed, either through the mechanical action of discharge or by aspirating air into the discharge device. The air entrained in the foam solution creates an expanded foam. As expanded foam is less dense than most hydrocarbons, the expanded foam forms a blanket on top of the flammable liquid. This foam blanket reduces fuel vapour propagation. Water, which represents as much as 97% of the foam solution, provides a cooling effect to further reduce vapour propagation and to cool hot objects that could serve as a source of re-ignition.
Carbon dioxide systems consist of a supply of carbon dioxide, stored as liquified compressed gas in pressure vessels (see figure 41.9 and figure 41.10). The carbon dioxide is held in the pressure vessel by means of an automatic valve that is opened upon fire by means of a separate detection system or by manual operation. Once released, the carbon dioxide is delivered to the fire by means of a piping and discharge nozzle arrangement. Carbon dioxide extinguishes fire by displacing the oxygen available to the fire. Carbon dioxide systems can be designed for use in open areas such as printing presses or enclosed volumes such as ship machinery spaces. Carbon dioxide, at fire-extinguishing concentrations, is toxic to people, and special measures must be employed to ensure that persons in the protected area are evacuated before discharge occurs. Pre-discharge alarms and other safety measures must be carefully incorporated into the design of the system to ensure adequate safety for people working in the protected area. Carbon dioxide is considered to be a clean extinguishant because it does not cause collateral damage and is electrically non-conductive.
Inert gas systems generally use a mixture of nitrogen and argon as an extinguishing medium. In some cases, a small percentage of carbon dioxide is also provided in the gas mixture. The inert gas mixtures extinguish fires by reducing oxygen concentration within a protected volume. They are suitable for use in enclosed spaces only. The unique feature offered by inert gas mixtures is that they reduce the oxygen to a low enough concentration to extinguish many types of fires; however, oxygen levels are not sufficiently lowered to pose an immediate threat to occupants of the protected space. The inert gases are compressed and stored in pressure vessels. System operation is similar to a carbon dioxide system. As the inert gases cannot be liquified by compression, the number of storage vessels required for protection of a given enclosed protected volume is greater than that for carbon dioxide.
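As a minimal worked sketch of this dilution effect, the residual oxygen concentration after discharge into a sealed, well-mixed enclosure can be estimated from the volume fraction of added inert gas. The design concentration used in the example is an assumed value; actual design follows the relevant system standards.

```python
# Oxygen dilution by an inert gas agent in a sealed enclosure: a simplified sketch.
# The design concentration below is an assumed value, not taken from any standard.

AMBIENT_O2_PERCENT = 20.9

def residual_oxygen(inert_gas_percent: float) -> float:
    """Oxygen concentration (vol %) after the enclosure atmosphere contains the
    given volume percentage of added inert gas, assuming complete mixing."""
    return AMBIENT_O2_PERCENT * (100.0 - inert_gas_percent) / 100.0

# Example: an assumed 40% inert gas design concentration.
print(f"Residual oxygen: {residual_oxygen(40.0):.1f} %")   # about 12.5 %
```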
Halons 1301, 1211 and 2402 have been identified as ozone-depleting substances. Production of these extinguishing agents ceased in 1994, as required by the Montreal Protocol, an international agreement to protect the earth’s ozone layer. Halon 1301 was most often used in fixed fire protection systems. Halon 1301 was stored as liquified, compressed gas in pressure vessels in a similar arrangement to that used for carbon dioxide. The advantage offered by halon 1301 was that storage pressures were lower and that very low concentrations provided effective extinguishing capability. Halon 1301 systems were used successfully for totally enclosed hazards where the extinguishing concentration achieved could be maintained for a sufficient time for extinguishment to occur. For most risks, concentrations used did not pose an immediate threat to occupants. Halon 1301 is still used for several important applications where acceptable alternatives have yet to be developed. Examples include use on-board commercial and military aircraft and for some special cases where inerting concentrations are required to prevent explosions in areas where occupants could be present. The halon in existing halon systems that are no longer required should be made available for use by others with critical applications. This will reduce the need to produce more of these environmentally harmful extinguishing agents and help protect the ozone layer.
Halocarbon agents were developed as the result of the environmental concerns associated with halons. These agents differ widely in toxicity, environmental impact, storage weight and volume requirements, cost and availability of approved system hardware. They all can be stored as liquified compressed gases in pressure vessels. System configuration is similar to a carbon dioxide system.
Only those skilled in this work are competent to design, install and maintain this equipment. It may be necessary for many of those charged with purchasing, installing, inspecting, testing, approving and maintaining this equipment to consult with an experienced and competent fire protection specialist to discharge their duties effectively.
This section of the Encyclopaedia presents a very brief and limited overview of the available choice of active fire protection systems. Readers may often obtain more information by contacting a national fire protection association, their insurer or the fire prevention department of their local fire service.
Profit is the main objective of any industry. To achieve this objective, an efficient and alert management and continuity of production are essential. Any interruption in production, for any reason, will adversely affect profits. If the interruption is the result of a fire or explosion, it may be long and may cripple the industry.
Very often, the argument is made that the property is insured and that any loss due to fire will be indemnified by the insurance company. It must be appreciated that insurance is only a device to spread the effect of the destruction brought by fire or explosion over as many people as possible. It cannot make good the national loss. Besides, insurance is no guarantee of continuity of production or of elimination or minimization of consequential losses.
What is indicated, therefore, is that the management must gather complete information on the fire and explosion hazard, evaluate the loss potential and implement suitable measures to control the hazard, with a view to eliminating or minimizing the incidence of fire and explosion. This involves the setting up of a private emergency organization.
Such an organization must, as far as possible, be considered from the planning stage itself, and implemented progressively from the time of selection of site until production has started, and then continued thereafter.
Success of any emergency organization depends to a large extent on the overall participation of all workers and various echelons of the management. This fact must be borne in mind while planning the emergency organization.
The various aspects of emergency planning are mentioned below. For more details, a reference may be made to the US National Fire Protection Association (NFPA) Fire Protection Handbook or any other standard work on the subject (Cote 1991).
Initiate the emergency plan by doing the following:
1. Identify and evaluate fire and explosion hazards associated with the transportation, handling and storage of each raw material, intermediate and finished products and each industrial process, as well as work out detailed preventive measures to counteract the hazards with a view to eliminating or minimizing them.
2. Work out the requirements of fire protection installations and equipment, and determine the stages at which each is to be provided.
3. Prepare specifications for the fire protection installation and equipment.
Determine the following:
1. availability of adequate water supply for fire protection in addition to the requirements for processing and domestic use
2. susceptibility of the site to natural hazards, such as floods, earthquakes, heavy rains, etc.
3. environments, i.e., the nature and extent of surrounding property and the exposure hazard involved in the event of a fire or explosion
4. existence of private (works) or public fire brigade(s), the distance at which such fire brigade(s) is (are) located and the suitability of the appliances available with them for the risk to be protected and whether they can be called upon to assist in an emergency
5. response from the assisting fire brigade(s) with particular reference to impediments, such as railway crossings, ferries, inadequate strength and (or) width of bridges in relation to the fire appliances, difficult traffic, etc.
6. socio-political environment, i.e., incidence of crime and political activities leading to law-and-order problems.
Prepare the layout and building plans, and the specifications of construction material. Carry out the following tasks:
1. Limit the floor area of each shop, workplace, etc. by providing fire walls, fire doors, etc.
2. Specify the use of fire-resistant materials for construction of building or structure.
3. Ensure that steel columns and other structural members are not exposed.
4. Ensure adequate separation between building, structures and plant.
5. Plan installation of fire hydrants, sprinklers, etc. where necessary.
6. Ensure the provision of adequate access roads in the layout plan to enable fire appliances to reach all parts of the premises and all sources of water for fire-fighting.
During construction, do the following:
1. Acquaint the contractor and his or her employees with the fire risk management policies, and enforce compliance.
2. Thoroughly test all fire protection installations and equipment before acceptance.
If the size of the industry, its hazards or its out-of-the-way location is such that a full-time fire brigade must be available on the premises, then organize, equip and train the required full-time personnel. Also appoint a full-time fire officer.
To ensure full participation of all employees, do the following:
1. Train all personnel in the observance of precautionary measures in their day-to-day work and the action required of them upon an outbreak of fire or explosion. The training must include operation of fire-fighting equipment.
2. Ensure strict observance of fire precautions by all concerned personnel through periodic reviews.
3. Ensure regular inspection and maintenance of all fire protection systems and equipment. All defects must be rectified promptly.
To avoid confusion at the time of an actual emergency, it is essential that everyone in the organization knows the precise part that he (she) and others are expected to play during the emergency. A well-thought-out emergency plan must be prepared and promulgated for this purpose, and all concerned personnel must be made fully familiar with it. The plan must clearly and unambiguously lay down the responsibilities of all concerned and also specify a chain of command. As a minimum, the emergency plan should include the following:
1. name of the industry
2. address of the premises, with telephone number and a site plan
3. purpose and objective of the emergency plan and the effective date of its coming into force
4. area covered, including a site plan
5. emergency organization, indicating chain of command from the work manager on downwards
6. fire protection systems, mobile appliances and portable equipment, with details
7. details of assistance availability
8. fire alarm and communication facilities
9. action to be taken in an emergency. Include separately and unambiguously the action to be taken by:
· the person discovering the fire
· the private fire brigade on the premises
· head of the section involved in the emergency
· heads of other sections not actually involved in the emergency
· the security organization
· the fire officer, if any
· the works manager
10. chain of command at the scene of the incident. Consider all possible situations, and indicate clearly who is to assume command in each case, including the circumstances under which another organization is to be called in to assist.
11. action after a fire. Indicate responsibility for:
· recommissioning or replenishing of all fire protection systems, equipment and water sources
· investigating the cause of fire or explosion
· preparation and submission of reports
· initiating remedial measures to prevent recurrence of a similar emergency.
When a mutual assistance plan is in operation, copies of the emergency plan must be supplied to all participating units in return for similar plans of their respective premises.
A situation necessitating the execution of the emergency plan may develop as a result of either an explosion or a fire.
Explosion may or may not be followed by fire, but in almost all cases, it produces a shattering effect, which may injure or kill personnel present in the vicinity and/or cause physical damage to property, depending upon the circumstances of each case. It may also cause shock and confusion and may necessitate the immediate shut-down of the manufacturing processes or a portion thereof, along with the sudden movement of a large number of people. If the situation is not controlled and guided in an orderly manner immediately, it may lead to panic and further loss of life and property.
Smoke given out by the burning material in a fire may involve other parts of the property and/or trap persons, necessitating an intensive, large-scale rescue operation/evacuation. In certain cases, large-scale evacuation may have to be undertaken when people are likely to get trapped or affected by fire.
In all cases in which large-scale sudden movement of personnel is involved, traffic problems are also created, particularly if public roads, streets or areas have to be used for this movement. If such problems are not anticipated and suitable action is not preplanned, traffic bottlenecks result, which hamper and retard fire extinguishment and rescue efforts.
Evacuation of a large number of persons, particularly from high-rise buildings, may also present problems. For successful evacuation, it is not only necessary that adequate and suitable means of escape are available, but also that the evacuation be effected speedily. Special attention should be given to the evacuation needs of disabled individuals.
Detailed evacuation procedures must, therefore, be included in the emergency plan. These must be frequently tested in the conduct of fire and evacuation drills, which may also involve traffic problems. All participating and concerned organizations and agencies must also be involved in these drills, at least periodically. After each exercise, a debriefing session must be held, during which all mistakes are pointed out and explained. Action must also be taken to prevent repetition of the same mistakes in future exercises and actual incidents by removing all difficulties and reviewing the emergency plan as necessary.
Proper records must be maintained of all exercises and evacuation drills.
Casualties in a fire or explosion must receive immediate medical aid or be moved speedily to a hospital after being given first aid.
It is essential that management provide one or more first-aid post(s) and, where necessary because of the size and hazardous nature of the industry, one or more mobile paramedical appliances. All first-aid posts and paramedical appliances must be staffed at all times by fully trained paramedics.
Depending upon the size of the industry and the number of workers, one or more ambulance(s) must also be provided and staffed on the premises for removal of casualties to hospitals. In addition, arrangement must be made to ensure that additional ambulance facilities are available at short notice when needed.
Where the size of the industry or workplace so demands, a full-time medical officer should also be made available at all times for any emergency situation.
Prior arrangements must be made with a designated hospital or hospitals at which priority is given to casualties who are removed after a fire or explosion. Such hospitals must be listed in the emergency plan along with their telephone numbers, and the emergency plan must have suitable provisions to ensure that a responsible person shall alert them to receive casualties as soon as an emergency arises.
It is important that all fire protection and emergency facilities are restored to a “ready” mode soon after the emergency is over. For this purpose, responsibility must be assigned to a person or section of the industry, and this must be included in the emergency plan. A system of checks to ensure that this is being done must also be introduced.
It is not practicable for any management to foresee and provide for all possible contingencies. It is also not economically feasible to do so. In spite of adopting the most up-to-date method of fire risk management, there are always occasions when the fire protection facilities provided on the premises fall short of actual needs. For such occasions, it is desirable to preplan a mutual assistance programme with the public fire department. Good liaison with that department is necessary so that the management knows what assistance that unit can provide during an emergency on its premises. Also, the public fire department must become familiar with the risk and what it could expect during an emergency. Frequent interaction with the public fire department is necessary for this purpose.
Hazards of the materials used in industry may not be known to fire-fighters during a spill situation, and accidental discharge and improper use or storage of hazardous materials can lead to dangerous situations that can seriously imperil their health or lead to a serious fire or explosion. It is not possible to remember the hazards of all materials. Means of ready identification of hazards have, therefore, been developed whereby the various substances are identified by distinct labels or markings.
Each country follows its own rules concerning the labelling of hazardous materials for the purpose of storage, handling and transportation, and various departments may be involved. While compliance with local regulations is essential, it is desirable that an internationally recognized system of identification of hazardous materials be evolved for universal application. In the United States, the NFPA has developed a system for this purpose. In this system, distinct labels are conspicuously attached or affixed to containers of hazardous materials. These labels indicate the nature and degree of hazards in respect of health, flammability and the reactive nature of the material. In addition, special possible hazards to fire-fighters can also be indicated on these labels. For an explanation of the degree of hazard, refer to NFPA 704, Standard System for the Identification of the Fire Hazards of Materials (1990a). In this system, the hazards are categorized as health hazards, flammability hazards, and reactivity (instability) hazards.
Health hazards include all possibilities of a material causing personal injury from contact with or absorption into the human body. A health hazard may arise out of the inherent properties of the material or from the toxic products of combustion or decomposition of the material. The degree of hazard is assigned on the basis of the greater hazard that may result under fire or other emergency conditions. It indicates to fire-fighters whether they can work safely only with special protective clothing or with suitable respiratory protective equipment or with ordinary clothing.
Degree of health hazard is measured on a scale of 4 to 0, with 4 indicating the most severe hazard and 0 indicating low hazard or no hazard.
Flammability hazards indicate the susceptibility of the material to burning. It is recognized that materials behave differently in respect of this property under varying circumstances (e.g., materials that may burn under one set of conditions may not burn if the conditions are altered). The form and inherent properties of the materials influence the degree of hazard, which is assigned on the same basis as for the health hazard.
Materials capable of releasing energy by themselves (i.e., by self-reaction or polymerization) and substances that can undergo violent eruption or explosive reactions on coming into contact with water, other extinguishing agents or certain other materials are said to possess a reactivity hazard.
The violence of reaction may increase when heat or pressure is applied or when the substance comes in contact with certain other materials to form a fuel-oxidizer combination, or when it comes in contact with incompatible substances, sensitizing contaminants or catalysts.
The degree of reactivity hazard is determined and expressed in terms of the ease, rate and quantity of energy release. Additional information, such as radioactivity hazard or prohibition of water or other extinguishing medium for fire-fighting, can also be given on the same label.
The label warning of a hazardous material is a diagonally placed square with four smaller squares (see figure 41.11).
The top square indicates the health hazard, the one on the left indicates the flammability hazard, the one on the right indicates the reactivity hazard, and the bottom square indicates other special hazards, such as radioactivity or unusual reactivity with water.
To supplement the above-mentioned arrangement, a colour code may also be used. The colour may be used as the background, or the numeral indicating the degree of hazard may be printed in the coded colour. The codes are health hazard (blue), flammability hazard (red), reactivity hazard (yellow) and special hazard (white background).
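To make the labelling scheme concrete, the short Python sketch below shows one possible way of representing such a label in software. It is only an illustration: the class name and the example ratings are invented for demonstration and are not taken from NFPA 704 itself.

```python
from dataclasses import dataclass

@dataclass
class NFPA704Label:
    """One possible representation of the four quadrants of an NFPA 704 diamond."""
    health: int        # blue quadrant, 0 (little or no hazard) to 4 (most severe)
    flammability: int  # red quadrant, 0 to 4
    reactivity: int    # yellow quadrant, 0 to 4
    special: str = ""  # white quadrant, e.g. unusual reactivity with water or radioactivity

    def validate(self) -> None:
        for name in ("health", "flammability", "reactivity"):
            value = getattr(self, name)
            if not 0 <= value <= 4:
                raise ValueError(f"{name} rating must be between 0 and 4, got {value}")

# Purely illustrative values; the real ratings for any actual material must be
# taken from NFPA 704 or the supplier's safety data sheet.
example = NFPA704Label(health=3, flammability=4, reactivity=0)
example.validate()
print(example)
```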
Depending on the nature of the hazardous material in the industry, it is necessary to provide protective equipment and special fire-extinguishing agents, including the protective equipment required to dispense the special extinguishing agents.
All workers must be trained in the precautions they must take and the procedures they must adopt to deal with each incident in the handling of the various types of hazardous materials. They must also know the meaning of the various identification signs.
All fire-fighters and other workers must be trained in the correct use of any protective clothing, protective respiratory equipment and special fire-fighting techniques. All concerned personnel must be kept alert and prepared to tackle any situation through frequent drills and exercises, of which proper records should be kept.
To deal with serious medical hazards and the effects of these hazards on fire-fighters, a competent medical officer should be available to take immediate precautions when any individual is exposed to unavoidable dangerous contamination. All affected persons must receive immediate medical attention.
Proper arrangements must also be made to set up a decontamination centre on the premises when necessary, and correct decontamination procedures must be laid down and followed.
Considerable waste is generated by industry or because of accidents during handling, transportation and storage of goods. Such waste may be flammable, toxic, corrosive, pyrophoric, chemically reactive or radioactive, depending upon the industry in which it is generated or the nature of goods involved. In most cases unless proper care is taken in safe disposal of such waste, it may endanger animal and human life, pollute the environment or cause fire and explosions that may endanger property. A thorough knowledge of the physical and chemical properties of the waste materials and of the merits or limitations of the various methods of their disposal is, therefore, necessary to ensure economy and safety.
Properties of industrial waste are briefly summarized below:
1. Most industrial waste is hazardous and can have unexpected significance during and after disposal. The nature and behavioural characteristics of all waste must therefore be carefully examined for their short- and long-term impact and the method of disposal determined accordingly.
2. Mixing of two seemingly innocuous discarded substances may create an unexpected hazard because of their chemical or physical interaction.
3. Where flammable liquids are involved, their hazards can be assessed by taking into consideration their respective flash points, ignition temperature, flammability limits and the ignition energy required to initiate combustion. In the case of solids, particle size is an additional factor that must be considered.
4. Most flammable vapours are heavier than air. Such vapours and heavier-than-air flammable gases that may be accidentally released during collection or disposal or during handling and transportation can travel considerable distances with the wind or towards a lower gradient. On coming in contact with a source of ignition, they flash back to source. Major spills of flammable liquids are particularly hazardous in this respect and may require evacuation to save lives.
5. Pyrophoric materials, such as aluminium alkyls, ignite spontaneously when exposed to air. Special care must therefore be taken in handling, transportation, storage and disposal of such materials, preferably carried out under a nitrogen atmosphere.
6. Certain materials, such as potassium, sodium and aluminium alkyls, react violently with water or moisture and burn fiercely. Bronze powder generates considerable heat in the presence of moisture.
7. The presence of potent oxidants with organic materials can cause rapid combustion or even an explosion. Rags and other materials soaked with vegetable oils or terpenes present a risk of spontaneous combustion due to the oxidation of oils and subsequent build-up of heat to the ignition temperature.
8. Several substances are corrosive and may cause severe damage or burns to skin or other living tissues, or may corrode construction materials, especially metals, thereby weakening the structure in which such materials may have been used.
9. Some substances are toxic and can poison humans or animals by contact with skin, inhalation or contamination of food or water. Their ability to do so may be short lived or may extend over a long period. Such substances, if disposed of by dumping or burning, can contaminate water sources or come into contact with animals or workers.
10. Toxic substances that are spilled during industrial processing, transportation (including accidents), handling or storage, and toxic gases that are released into the atmosphere can affect emergency personnel and others, including the public. The hazard is all the more severe if the spilled substance(s) is vaporized at ambient temperature, because the vapours can be carried over long distances due to wind drift or run-off.
11. Certain substances may emit a strong, pungent or unpleasant odour, either by themselves or when they are burnt in the open. In either case, such substances are a public nuisance, even though they may not be toxic, and they must be disposed of by proper incineration, unless it is possible to collect and recycle them. Just as odorous substances are not necessarily toxic, odourless substances and some substances with a pleasant odour may produce harmful physiological effects.
12. Certain substances, such as explosives, fireworks, organic peroxides and some other chemicals, are sensitive to heat or shock and may explode with devastating effect if not handled carefully or mixed with other substances. Such substances must, therefore, be carefully segregated and destroyed under proper supervision.
13. Waste materials that are contaminated with radioactivity can be as hazardous as the radioactive materials themselves. Their disposal requires specialized knowledge. Proper guidance for disposal of such waste may be obtained from a country’s nuclear energy organization.
Some of the methods that may be employed to dispose of industrial and emergency waste are biodegradation, burial, incineration, landfill, mulching, open burning, pyrolysis and disposal through a contractor. These are briefly explained below.
Many chemicals are completely destroyed within six to 24 months when they are mixed with the top 15 cm of soil. This phenomenon is known as biodegradation and is due to the action of soil bacteria. Not all substances, however, behave in this way.
Waste, particularly chemical waste, is often disposed of by burial. This is a dangerous practice in so far as active chemicals are concerned, because, in time, the buried substance may get exposed or leached by rain into water resources. The exposed substance or the contaminated material can have adverse physiological effects when it comes in contact with water that is drunk by humans or animals. Cases are on record in which water was contaminated 40 years after burial of certain harmful chemicals.
Incineration is one of the safest and most satisfactory methods of waste disposal if the waste is burned in a properly designed incinerator under controlled conditions. Care must be taken, however, to ensure that the substances contained in the waste are amenable to safe incineration without posing any operating problem or special hazard. Almost all industrial incinerators require the installation of air pollution control equipment, which must be carefully selected and installed after taking into consideration the composition of the stack effluent given out by the incinerator during the burning of industrial waste.
Care must be taken in the operation of the incinerator to ensure that its operative temperature does not rise excessively either because a large amount of volatiles is fed or because of the nature of the waste burned. Structural failure can occur because of excessive temperature, or, over time, because of corrosion. The scrubber must also be periodically inspected for signs of corrosion which can occur because of contact with acids, and the scrubber system must be maintained regularly to ensure proper functioning.
Low-lying land or a depression in land is often used as a dump for waste materials until it becomes level with the surrounding land. The waste is then levelled, covered with earth and rolled hard. The land is then used for buildings or other purposes.
For satisfactory landfill operation, the site must be selected with due regard to the proximity of pipelines, sewer lines, power lines, oil and gas wells, mines and other hazards. The waste must then be mixed with earth and evenly spread out in the depression or a wide trench. Each layer must be mechanically compacted before the next layer is added.
A 50 cm layer of earth is typically laid over the waste and compacted, leaving sufficient vents in the soil for the escape of gas that is produced by biological activity in the waste. Attention must also be paid to proper drainage of the landfill area.
Depending on the various constituents of waste material, it may at times ignite within the landfill. Each such area must, therefore, be properly fenced off and continued surveillance maintained until the chances of ignition appear to be remote. Arrangements must also be made for extinguishing any fire that may break out in the waste within the landfill.
Some trials have been made for reusing polymers as mulch (loose material for protecting the roots of plants) by chopping the waste into small shreds or granules. When so used, it degrades very slowly. Its effect on the soil is, therefore, purely physical. This method has, however, not been used widely.
Open burning of waste causes pollution of the atmosphere and is hazardous in as much as there is a chance of the fire getting out of control and spreading to the surrounding property or areas. Also, there is a chance of explosion from containers, and there is a possibility of harmful physiological effects of radioactive materials that may be contained in the waste. This method of disposal has been banned in some countries. It is not a desirable method and should be discouraged.
Recovery of certain compounds, by distillation of the products given out during pyrolysis (decomposition by heating) of polymers and organic substances, is possible, but not yet widely adopted.
Disposal through a contractor is probably the most convenient method. It is important that only reliable contractors who are knowledgeable and experienced in the disposal of industrial waste and hazardous materials are selected for the job. Hazardous materials must be carefully segregated and disposed of separately.
Specific examples of the types of hazardous materials that are often found in today’s industry include: (1) combustible and reactive metals, such as magnesium, potassium, lithium, sodium, titanium and zirconium; (2) combustible refuse; (3) drying oils; (4) flammable liquids and waste solvents; (5) oxidizing materials (liquids and solids); and (6) radioactive materials. These materials require special handling and precautions that must be carefully studied. For more details on identification of hazardous materials and hazards of industrial materials, the following publications may be consulted: Fire Protection Handbook (Cote 1991) and Sax’s Dangerous Properties of Industrial Materials (Lewis 1979). | http://ilocis.org/documents/chpt41e.htm | 13 |
101 | NOTE: This page is a continuation of the notes and
worksheets for topic 9.2 Space. Two separate pages were
used for this topic because of the large volume of material in the topic.
This will keep download time within acceptable limits.
9.2 Space Continued
Relativity is the study of the relative motions
of objects. Einstein’s
Theory of Relativity is one of the greatest intellectual
achievements of the 20th Century.
Special Relativity, developed by Einstein in 1905,
deals with systems that are moving at constant velocity (no
acceleration) with respect to each other.
General Relativity proposed in 1916 deals with systems
that are accelerating with respect to each other.
Before commencing our study of Relativity some preliminary
definitions are necessary.
A reference frame can be considered to be a
set of axes with respect to which distance measurements can be made. A set of recording clocks can be considered to be embedded in
the frame to specify time.
An inertial reference frame is defined as one in which Newton's First
Law (his law of inertia) is valid.
In other words, an inertial reference frame is one that is not accelerating: it is either at rest or moving with constant velocity.
A non-inertial reference frame is one that is accelerating.
A physical event can be considered to be
something that happens independently of the reference frame used to
describe it – eg lightning flashes.
An event can be characterized in a Cartesian reference frame
by stating its coordinates x, y, z and t.
Brief History of Relativity Before Einstein
The phenomenon of motion has been studied for thousands
of years. To the ancient Greek philosopher Aristotle
it was obvious that objects would assume a preferred state
of rest unless some external force propelled them.
He also believed in the concepts of Absolute Space
and Absolute Time – that is that both space and time
exist in their own right, independently of each other and
of other material things (Refs 1, 2 & 3). Thus, to
Aristotle it was possible to assign absolute values of position
and time to events. Aristotle’s work was held in such
high regard that it remained basically unchallenged until
the end of the sixteenth century, when Galileo showed
that it was incorrect.
The view that motion must be relative – that is,
it involves displacements of objects relative to some reference
system – had its beginnings with Galileo. Galileo’s
experiments and “thought experiments” led him to state what
is now called the Principle of Galilean Relativity: the
laws of mechanics are the same for a body at rest and a
body moving at constant velocity.
Using Galileo’s measurements as a starting point Isaac
Newton developed his Laws of Motion and his Law of Universal
Gravitation. Newton showed that it is only possible
to determine the relative velocity of one reference frame
with respect to another and not the absolute velocity of
either frame. So, as far as mechanics is concerned,
no preferred or absolute reference frame exists.
The Principle of Newtonian Relativity may be stated
as: the laws of mechanics must be the same in all inertial reference frames.
Thus, due to Galileo and Newton, the concept of Absolute
Space became redundant since there could be no absolute
reference frame with respect to which mechanical measurements
could be made. However, Galileo and Newton retained
the concept of Absolute Time, or the ability to establish
that two events that happened at different locations occurred
at the same time (1). In other words, if an observer
in one reference frame observed two events at different
locations as occurring simultaneously, then all observers
in all reference frames would agree that the events were simultaneous.
The Newtonian concept of the structure of space and time
remained unchallenged until the development of the electromagnetic
theory in the nineteenth century, principally by Michael
Faraday and James Clerk Maxwell. Maxwell showed that
electromagnetic waves in a vacuum ought to propagate at
a speed of c = 3 x 10^8 m/s, the speed of
light (1). To 19th Century physicists this
presented a problem. If EM waves were supposed to
propagate at this fixed speed c, what was this speed measured
relative to? How could you measure it relative to
a vacuum? Newton had done away with the idea of an
absolute reference frame (2).
Quite apart from the relativity problem, it seemed inconceivable
to 19th Century physicists that light and other
EM waves, in contrast to all other kinds of waves, could
propagate without a medium. It seemed to be a logical
step to postulate such a medium, called the aether (or
ether), even though it was necessary to assume unusual
properties for it, such as zero density and perfect transparency,
to account for its undetectability. This aether
was assumed to fill all space and to be the medium with
respect to which EM waves propagate with the speed c.
It followed, using Newtonian relativity, that an observer
moving through the aether with velocity u would measure
a velocity for a light beam of (c + u) (5).
So theoretically, if the aether exists, an observer on
earth should be able to measure changes in the velocity
of light due to the earth’s motion through the aether.
The Michelson-Morley experiment attempted to do just this.
The Aether Model for
the Transmission of Light
Before moving onto the Michelson-Morley experiment,
we pause to examine in more detail the features of
the aether model for the transmission of light.
When 19th Century physicists chose the aether
as the medium for the propagation of EM waves they were
merely borrowing and adapting an existing concept.
The fact that certain physical events propagate themselves through
astronomic space led long ago to the hypothesis that space is not
empty but is filled with an extremely fine substance, the Aether,
which is the carrier or medium of these phenomena. Indeed
the aether was proposed as the carrier of light
in Rene Descartes’ Dioptrics, which in 1638
became the first published scientific work on optics (4).
In this work, Descartes proposed that the aether was all-pervasive
and made objects visible by transmitting a pressure from the object
to the observer’s eye.
Robert Hooke in 1667
developed pressure wave theories that allowed for the
propagation of light (6). In these theories, luminous objects set up vibrations
that were transmitted through the aether in much the same way as sound waves are transmitted through air.
The Dutchman Christiaan
Huygens published a full theory on
the wave nature of light in 1690. According to Huygens,
light was an irregular series of shock waves that proceeded
with great velocity through a continuous medium – the
luminiferous aether. This
aether consisted of minute elastic particles uniformly
compressed together. The movement of light through
the aether was not an actual transfer of these particles
but rather a compression wave moving through the particles.
It was thought that the aether particles were not packed
in rows but were irregular in their orientation so that
a disturbance at one particle would radiate out from it
in all directions.
In 1817 the French engineer Augustine Fresnel
and the English scientist Thomas Young independently
deduced that light was a transverse wave motion.
This required a rethink of the nature of the aether, which
until this time had been considered by most scientists
to be a thin fluid of some kind. Transverse
waves can only travel through solid media (or along the
surface of fluids). Clearly, the aether had to
be a solid. The solid also had to be very rigid
to allow for the high velocity at which light travelled.
Clearly, this posed a problem, since such a solid would
offer great resistance to the motion of the planets and
yet no such resistance had been noted by astronomers.
In 1845 George Stokes attempted to solve the dilemma
by proposing that the aether acted like pitch or wax
which is rigid for rapidly changing forces but is fluid
under the action of forces applied over long periods of
time. The forces that occur in light vibrations
change extremely quickly (600 x 10^12 times
per second) compared with the relatively slow processes
that occur in planetary motions. Thus, the aether
may function for light as an elastic solid but give way
completely to the motions of the planets (4).
In 1865 the great Scottish physicist James Clerk Maxwell
published his theory of electromagnetism, which summarised
the basic properties of electricity and magnetism in four
equations. Maxwell also deduced that light waves
are electromagnetic waves and that all electromagnetic
waves travelled at 3 x 10^8 m/s relative to
the aether. The aether was now called the electromagnetic
aether rather than the luminiferous aether (4) and
became a kind of absolute reference frame for electromagnetic phenomena.
Outline the features of the aether model for the
transmission of light. (Note: This is Syllabus point
9.2.4 Column 1 Dot Point 1.)
For several hundred years the aether was believed to be the medium
that acted as a carrier of light waves. The aether was
all-pervasive, permeating all matter as evidenced by the
transmission of light through transparent materials.
Originally, the aether was believed to be a very thin, zero density,
transparent fluid. Young and Fresnel showed that light was a
transverse wave which implied that the aether must be solid and very
rigid to transmit the high velocity of light. Stokes (1845)
proposed that the aether acted like wax - very rigid for
rapidly changing forces (like high velocity light travel) but very
fluid for long continued forces (like the movement of the
planets). Maxwell (1865) used this aether as the absolute
reference frame in which the speed of all EM waves is 3 x 10^8 m/s.
The Michelson-Morley Experiment
In 1887 Albert Michelson and Edward Morley
of the USA carried out a very careful experiment at
the Case School of Applied Science in Cleveland.
The aim of the experiment was to measure the motion
of the earth relative to the aether and thereby demonstrate
that the aether existed. Their method involved
using the phenomenon of the interference of light to
detect small changes in the speed of light due to the
earth’s motion through the aether (5).
The whole apparatus is mounted on a solid stone block
for stability and is floated in a bath of mercury so
that it could be rotated smoothly about a central axis
(5). The earth, together
with the apparatus is assumed to be travelling through
the aether with a uniform velocity u of about 30 km/s.
This is equivalent to the earth at rest with the aether
streaming past it at a velocity –u.
Now in the experiment a beam of light from the source
S is split into two beams by a half-silvered mirror
K as shown. One half of the beam travels from
K to M1 and is then reflected back to K,
while the other half is reflected from K to M2
and then reflected from M2 back to K.
At K part of the beam from M1 is reflected
to the observer O and part of the beam from M2
is transmitted to O.
Although the mirrors M1 and M2
are the same distance from K, it is virtually impossible
to have the distances travelled by each beam exactly
equal, since the wavelength of light is so small compared
with the dimensions of the apparatus. Thus, the
two beams would arrive at O slightly out of phase and
would produce an interference pattern at O.
There is also a difference in the time taken by each
beam to traverse the apparatus and arrive at O, since
one beam travels across the aether stream direction
while the other travels parallel and then anti-parallel
to the aether stream direction (see the note below).
This difference in time taken for each beam to arrive
at O would also introduce a phase difference and would
thus influence the interference pattern.
Now if the apparatus were to be rotated through 90°,
the phase difference due to the path difference of each
beam would not change. However, as the direction
of the light beams varied with the direction of flow
of the aether, their relative velocities would alter
and thus the difference in time required for each beam
to reach O would alter. This would result in
a change in the interference pattern as the apparatus is rotated.
The Michelson-Morley apparatus was capable of detecting
a phase change of as little as 1/100 of a fringe.
The expected phase change was 4/10 of a fringe.
However, no such change was observed.
Thus, the result of the Michelson-Morley experiment
was that no motion of the earth relative to the aether
was detected. Since the experiment failed
in its objective, the result is called a null
result. The experiment has since been repeated
many times and the same null result has always been obtained.
NOTE: This time difference
mentioned above comes about from classical vector work.
After the original beam is split at K the half transmitted
to M1 travels with velocity (c + u)
relative to the “stationary” earth, as it is travelling
in the direction of “flow” of the aether. When
it is reflected from M1 it travels towards
K with a velocity relative to the earth of (c – u)
against the motion of the aether stream. Thus,
the time taken for the total journey of this beam from
K to M1 and back again is:
t1 = L/(c + u) + L/(c - u) = 2Lc/(c^2 - u^2)
where L is the length of the arm KM1 (taken to be the same as KM2).
However, the other beam travels with velocity √(c^2 - u^2) towards
M2 and then with the same speed in the opposite
direction away from M2 after reflection.
Thus, the time for the total journey of the beam from
K to M2 and back again is:
t2 = 2L/√(c^2 - u^2)
Clearly, t1 and t2 are different.
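The size of the effect being looked for can be estimated with a short Python sketch. The speed u of about 30 km/s is the figure quoted above; the arm length of 11 m and the wavelength of 550 nm are assumed, typical values used only for illustration.

```python
import math

c = 3.0e8            # speed of light (m/s)
u = 3.0e4            # assumed speed of the earth through the aether (m/s), about 30 km/s
L = 11.0             # assumed effective arm length of the interferometer (m)
wavelength = 550e-9  # assumed wavelength of the light used (m)

# Beam travelling parallel and then anti-parallel to the aether stream
t1 = L / (c + u) + L / (c - u)        # = 2Lc / (c^2 - u^2)

# Beam travelling across the aether stream and back
t2 = 2 * L / math.sqrt(c**2 - u**2)

dt = t1 - t2
path_difference = c * dt

# Rotating the apparatus through 90 degrees swaps the roles of the two arms,
# so the expected shift in the fringe pattern is roughly twice the path
# difference divided by the wavelength.
expected_fringe_shift = 2 * path_difference / wavelength
print(f"t1 - t2 = {dt:.2e} s, expected fringe shift = {expected_fringe_shift:.2f}")
# With these assumed values the expected shift is about 0.4 of a fringe, in line
# with the figure quoted earlier in these notes; no such shift was ever observed.
```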
The Role of the Michelson-Morley Experiment
The Michelson-Morley experiment is an excellent example of a critical
experiment in science. The
fact that no motion of the earth relative
to the aether was detected suggested quite strongly that
the aether hypothesis was incorrect and that no aether (absolute)
reference frame existed for electromagnetic phenomena.
This opened the way for a whole new way of thinking
that was to be proposed by Albert Einstein in his Theory of Special Relativity.
It is worth noting that the null result of the Michelson-Morley
experiment was such a blow to the aether hypothesis in particular
and to theoretical physics in general that the experiment was
repeated by many scientists over more than 50 years. A null result has always been obtained.
More detail - beyond what is required by the Syllabus: The
aether hypothesis had become so entrenched in 19th
Century Physics thinking that many scientists ignored the
significance of the null result and instead, looked for alternative
hypotheses to explain the null result:
The Fitzgerald-Lorentz Contraction Hypothesis
– in which all bodies are contracted in the direction of motion
relative to the stationary aether by a factor of
√(1 - (v^2/c^2)).
This was contradicted when the arms of the Michelson-Morley
interferometer were made unequal in the Kennedy-Thorndike experiment.
The Aether Drag Hypothesis – in which the
aether was believed to be dragged along by all bodies of finite
mass. This too was
contradicted both on astronomical grounds (see the Bradley
Aberration – ref. 5) and by experiment (see Fizeau’s experiment
– ref. 5).
Attempts were also made to modify electromagnetic
theory itself. Emission
theories suggested that the velocity of light is c relative to
the original source and that this velocity is independent of the
state of motion of the medium transmitting the light.
This automatically explains the null result.
It was found though that all such emission theories could be
directly contradicted by experiment.
physicists like Lorentz (1899), Larmor (1900) and Poincare (1905)
showed that the changes needed to make the aether hypothesis
consistent with the null result of the Michelson-Morley experiment
implied that the aether (absolute) reference frame was impossible.
The aether ceased to exist as a real substance (4).
Principle of Relativity
A relativity principle is a statement of what the
invariant quantities are between different reference
frames. It says that for such quantities the
reference frames are equivalent to one another, no
one having an absolute or privileged status relative
to the others. So, for example, Newton’s relativity
principle tells us that all inertial reference frames
are equivalent with respect to the laws of mechanics.
As we have seen, for quite a while in the 19th
Century it looked as if there was a preferred or absolute
reference frame (the aether) as far as the laws of
electromagnetism were concerned. However, in
1904 Henri Poincare proposed his Principle of Relativity:
“The laws of physics are the same for a fixed observer
as for an observer who has a uniform motion of translation
relative to him”. Note that this principle applies
to mechanics as well as electromagnetism. Although
his principle acknowledged the futility in continued
use of the aether as an absolute reference frame,
Poincare did not fully grasp the implications.
Poincare still accepted the Newtonian concept of absolute
time. Einstein abandoned it.
Theory of Special Relativity
In 1905, Albert Einstein published his famous paper
entitled: “On the Electrodynamics of Moving Bodies”,
in which he proposed his two postulates of relativity
and from these derived his theory of Special Relativity.
Einstein’s postulates are:
1. The Principle of Relativity – All the laws of
physics are the same in all inertial reference frames
– no preferred inertial frame exists.
2. The Principle of the Constancy of the Speed of Light
– The speed of light in free space has the same value
c, in all inertial frames, regardless of the velocity
of the observer or the velocity of the source emitting the light.
The significance of the
first postulate is that it extends Newtonian Relativity
to all the laws of physics not just mechanics.
It implies that all motion is relative – no absolute
reference frame exists. The significance of
the second postulate is that it denies the existence
of the aether and asserts that light moves at speed
c relative to all inertial observers. It also
predicts the null result of the Michelson-Morley experiment,
as the speed of light along both arms of the interferometer
will be c.
Perhaps the greatest
significance of the second postulate, however, is
that it forces us to re-think our understanding of
space and time. In Newtonian Relativity,
if a pulse of light were sent from one place to another,
different observers would agree on the time that the
journey took (since time is absolute), but would not
always agree on how far the light travelled (since
space is not absolute). Since the speed of light
is just the distance travelled divided by the time
taken, different observers would measure different
speeds for light. In Special Relativity,
however, all observers must agree on how fast light
travels. They still do not agree on the
distance the light has travelled, so they must therefore
now also disagree over the time it has taken.
In other words, Special Relativity put an end to
the idea of absolute time (2).
Clearly, since c must
remain constant, both space and time must be relative quantities.
Let us consider a “thought experiment” (Gedanken)
to illustrate that time is relative. Imagine
two observers O and O’ standing at the midpoints of
their respective trains (reference frames) T and T’.
T’ is moving at a constant speed v with respect
to T. Just at the instant when the two observers
O and O’ are directly opposite each other, two lightning
flashes (events) occur simultaneously in the T frame,
as shown below. The question is, will these
two events appear simultaneous in the T’ frame?
From our T reference frame, it is clear that observer
O’ in the T’ frame moves to the right during the time
the light is travelling to O’ from A’ and B’.
At the instant that O receives the light from A and
B, the light from B’ has already passed O’, whereas
the light from A’ has not yet reached O’. O’
will thus observe the light coming from B’ before
receiving the light from A’. Since the speed
of light along both paths O’A’ and O’B’ is c (according
to the second postulate), O’ must conclude that the
event at B’ occurred before the event at A’.
The two events are not simultaneous for O’, even though
they are for O.
Thus, we can conclude that two
events that are simultaneous to one observer are not
necessarily simultaneous to a second observer.
Moreover, since there is no preferred reference frame,
either description is equally valid. It follows
that simultaneity is not an absolute concept, but
depends on the reference frame of the observer.
When measuring the length of an object it is necessary
to be able to determine the exact position of the
ends of the object simultaneously. If, however,
observers in different reference frames may disagree
on the simultaneity of two events, they may also disagree
about the length of objects.
In fact, using Special Relativity theory, it is
possible to show mathematically and to demonstrate
experimentally that the length of a moving rod
appears to contract in the direction of motion relative
to a “stationary” observer. This is described
by the Lorentz-Fitzgerald Contraction Equation:
l = l0 √(1 - (v^2/c^2))
where l is the moving length, l0 is the rest length (or proper
length) and v is the velocity of the rod relative to
the stationary observer. Note that this contraction
takes place in the direction of motion only.
So, for example, an observer on earth watching a rectangular
spacecraft move past the earth in the horizontal plane
would observe the horizontal length of the craft to
be contracted but the vertical width of the craft
to remain the same as seen by the observer on the
rocket. (Note that this is an oversimplification.
Three dimensional objects travelling at relativistic
speeds relative to a given reference frame will appear
to be distorted in other ways as well, to an observer
at rest in that frame. This is outside the scope
of this course.)
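As a quick numerical illustration, the contraction equation above can be evaluated with a short Python sketch; the 0.9c speed and the 100 m rest length are arbitrary example values.

```python
import math

def contracted_length(rest_length, v, c=3.0e8):
    """Length of a moving object as measured by a 'stationary' observer."""
    return rest_length * math.sqrt(1 - (v / c) ** 2)

c = 3.0e8
print(contracted_length(rest_length=100.0, v=0.9 * c))  # about 43.6 m
```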
Let us consider another thought experiment.
Imagine a “light clock”, as shown below. Time
is measured by light bouncing between two mirrors.
This clock ticks once for one complete up and down
motion of the light.
The light clock is placed in a rocket that travels
to the right at a constant speed v with respect to a stationary observer
on earth. When viewed by an observer travelling
with the clock, the light follows the path shown in
(a) above. To the stationary observer on earth,
who sees the clock moving past at a constant speed,
the path appears as in (b) above.
From (a), the time taken for light to make one complete
trip up and down, t0, is
t0 = 2L/c     (1)
Remember that this represents one tick or one second
on the light clock as seen by the observer moving
with the clock. From (b), the distance the light
moves between A and B is c.tAB, and the distance moved
by the whole clock in time tAB is v.tAB.
So, by Pythagoras’ Theorem:
(c.tAB)^2 = (v.tAB)^2 + L^2
tAB^2 = L^2 / (c^2 - v^2)
which can then be re-arranged (divide throughout the RHS by c^2 and take the square root) to
tAB = (L/c) / √(1 - (v^2/c^2))
and thus, the total time taken by the light for
one complete up and down motion is:
tABC = (2L/c) / √(1 - (v^2/c^2))
But from (1) above:
t0 = 2L/c
And so we have:
t = tABC = t0 / √(1 - (v^2/c^2))
Clearly, the time interval corresponding
to one tick of the light clock is larger for the observer
on earth than for the observer on the rocket, since
the denominator on the RHS of the above equation is
always less than 1.
The above equation may be interpreted as meaning
that the time interval t for an event to occur, measured by an
observer moving with respect to a clock is longer
than the time interval t0 for the same event, measured
by an observer at rest with respect to the clock.
An alternative way of stating this is that clocks
moving relative to an observer are measured by that
observer to run more slowly than clocks at rest with
respect to that observer. That is, time in
a moving reference frame appears to go slower relative
to a “stationary” observer. This result is called
The time interval t0 is referred to as the proper
time. t0 is always the time for an
event as measured by the observer who is at rest with respect to
the clock, i.e., the observer travelling with the clock in the moving
reference frame (Ref 5, pp.63-64).
An example is probably a good idea
at this stage. Consider a rocket travelling
with a speed of 0.9c relative to the earth.
If an observer on the rocket records a time for a
particular event as 1 second on his clock, what time
interval would be recorded by the earth observer?
From our time dilation equation we have:
t = 1 / √(1 - [(0.9c)^2/c^2])
t = 2.29 s
So, to an observer on earth, the time taken for
the event is 2.29s. The earth observer sees
that the rocket clock has slowed down. It
is essential that you understand that this is not
an illusion. It makes no sense to ask which
of these times is the “real” time. Since no
preferred reference frame exists both times are as
real as each other. They are the real times
seen for the event by the respective observers.
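The arithmetic of this example is easy to check with a short Python sketch; the function below simply implements the time dilation equation derived above.

```python
import math

def dilated_time(proper_time, v, c=3.0e8):
    """Time interval measured by an observer who sees the clock moving at speed v."""
    return proper_time / math.sqrt(1 - (v / c) ** 2)

c = 3.0e8
print(dilated_time(proper_time=1.0, v=0.9 * c))  # about 2.29 s, as above
print(dilated_time(proper_time=1.0, v=0.1 * c))  # about 1.005 s: barely noticeable
```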
Time dilation tells us that a moving clock runs
slower than a clock at rest by a factor of 1/√(1 - (v^2/c^2)).
This result, however, can be generalised beyond clocks
to include all physical, biological and chemical processes.
The Theory of Relativity predicts that all such processes
occurring in a moving frame will slow down relative
to a stationary clock.
Evidence for Time Dilation (Not Examinable)
The validity of time dilation has been confirmed
experimentally many times. One of these experiments
involves the study of the behaviour of particles called
muons, which are produced by collisions in the earth’s
upper atmosphere. When measured in their own
rest frame they have a lifetime of 2.2 µs. Their speed can reach as high
as 0.99c, which would enable them to travel about
650 m before decaying. Clearly, this distance
is not sufficient to allow the muons to reach the
surface of the earth and yet muons are found in plentiful
supply even in mine shafts beneath the earth’s surface.
The explanation is provided by time dilation.
The lifetime of muons with a speed of 0.99c is
dilated to about 16 µs in the earth's reference frame.
This much time allows the muons to travel close to
5 km in the earth’s reference frame – sufficient to
reach the ground. (Remember though, if you could
think of a muon carrying a clock along with it, then
this clock would record the normal muon life span
of 2.2 µs.
2.2 µs of
moving-muon time is equivalent to 16 µs
of stationary earth observer's time.)
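The muon figures quoted above can be reproduced with a short Python sketch; the rounded values 2.2 µs, 0.99c and c = 3 x 10^8 m/s are used, so the results are approximate.

```python
import math

c = 3.0e8
v = 0.99 * c
proper_lifetime = 2.2e-6  # muon lifetime in its own rest frame (s)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
dilated_lifetime = gamma * proper_lifetime

print(v * proper_lifetime)     # about 650 m without time dilation
print(dilated_lifetime * 1e6)  # about 16 microseconds in the earth frame
print(v * dilated_lifetime)    # about 4600 m, enough to reach the ground
```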
The Twin Paradox (Not Examinable)
The Twin Paradox is another example of a thought
experiment in relativity. Consider two twins.
Twin A takes a trip in a rocket ship at constant speed
v relative to the earth to a distant point in space
and then returns, again at the constant speed v.
Twin B remains on earth the whole time. According
to Twin B, the travelling twin will have aged less,
since his clock would have been running slowly relative
to Twin B’s clock and would therefore have recorded
less time than Twin B’s clock. However, since
no preferred reference frame exists, Twin A would
say that it is he who is at rest and that the earth
twin travels away from him and then returns.
Hence, Twin A will predict that time will pass more
slowly on earth, and hence the earth twin will be
the younger one when they are re-united. Since
they both cannot be right, we have a paradox.
To resolve the paradox we need to realise that it
arises because we assume that the twins’ situations
are symmetrical and interchangeable. On closer
examination we find that this assumption is not correct.
The results of Special Relativity can only be applied
by observers in inertial reference frames. Since
the earth is considered an inertial reference frame,
the prediction of Twin B should be reliable.
Twin A is only in an inertial frame whilst travelling
at constant velocity v. During the intervals
when the rocket ship accelerates, to speed up or slow
down, the reference frame of Twin A is non-inertial.
The predictions of the travelling twin based on Special
Relativity during these acceleration periods will
be incorrect. General Relativity can be used
to treat the periods of accelerated motion.
When this is done, it is found that the travelling
twin is indeed the younger one.
Note that the only way to tell whose clock has actually
been running slowly is to bring both clocks back together,
at rest on earth. It is then found that it
is the observer who goes on the round trip whose clock
has actually slowed down relative to the clock of
the observer who stayed at home.
Relativity and Space Travel
Time dilation and length contraction have raised
considerable interest in regard to space travel.
Consider the following thought experiment. Imagine
that adventurous Toni goes on an excursion to Alpha
Centauri in a space ship at 0.9c. Her friend Candy
stays behind on earth. Candy knows that
α-Centauri is 4.3 light years away and
so calculates the time for the trip as 4.8 years.
Allowing for a brief stop over when Toni gets there (shopping, cappuccino & cake etc),
Candy expects that Toni will be
back in about 10 years.
Travelling at 0.9c, Toni measures the distance
between earth and α-Centauri
to be contracted to 1.87 light
years and thus calculates the time for the trip as
2.1 years. Thus, she expects to be back on earth
in a little over 4 years.
Clearly, this 2.1 years of rocket time must be equivalent
to 4.8 years of earth time, since both observers must
observe the laws of physics to be the same.
(Note: We are ignoring the brief periods of acceleration
required by Toni.) This equivalence can be
checked using the time dilation equation.
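That check is easy to do. The short Python sketch below reproduces the figures used in this example (4.3 light years and 0.9c), working in years and light years so that c = 1.

```python
import math

v = 0.9                     # speed as a fraction of c
distance_earth_frame = 4.3  # light years, as measured in the earth frame

gamma = 1 / math.sqrt(1 - v ** 2)

one_way_earth_time = distance_earth_frame / v        # about 4.8 years
contracted_distance = distance_earth_frame / gamma   # about 1.87 light years
one_way_rocket_time = contracted_distance / v        # about 2.1 years

# Time dilation check: multiplying the rocket time by gamma should give back
# the earth time for the same leg of the trip.
print(one_way_earth_time, contracted_distance, one_way_rocket_time)
print(one_way_rocket_time * gamma)                   # about 4.8 years again
```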
When Toni arrives back on earth she finds that
she has indeed aged a little over four years, whilst
poor Candy is nearly 10 years older than when she
left. (Perhaps the rare α-Centaurian
wolfhound that Toni has
bought for Candy will soothe the upset.)
Seriously, though, the closer v gets to c, the closer
the distance to α-Centauri
and the time required to get
there get to zero as seen by Toni. Obviously,
the minimum time for the journey as seen by Candy is 4.3 years. So, if
Toni travels the distance
in 1 s, then 1 s of her time is equivalent to 4.3
years of Candy's (earth) time. If Toni travelled for 1 hour at this
very high speed, (3600 x 4.3) years or 15480 years
would elapse on earth. If Toni travelled for
a whole year on the rocket at this high speed, 135
million years would pass on earth.
While time dilation and length contraction overcome
one of the great difficulties of space travel, problems
obviously remain in producing such high speeds.
Mass Dilation and the Mass-Energy Relationship
Another aspect of the Special Relativity theory
is that the mass of a moving object is greater than
when it is stationary. In fact, the higher the
velocity of the object, the more massive it becomes.
This is called Mass Dilation and is represented by the equation:
m = m0 / √(1 - (v^2/c^2))
where m = relativistic mass of particle, m0 = rest mass of particle,
v is the velocity of the particle relative
to a stationary observer and c = speed of light.
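As a quick illustration, the sketch below evaluates this equation for an electron moving at 0.9c; the rest mass is the standard value of 9.11 x 10^-31 kg and the speed is an arbitrary example.

```python
import math

def relativistic_mass(rest_mass, v, c=3.0e8):
    """Mass of a particle as measured by an observer who sees it moving at speed v."""
    return rest_mass / math.sqrt(1 - (v / c) ** 2)

electron_rest_mass = 9.11e-31  # kg
c = 3.0e8
print(relativistic_mass(electron_rest_mass, 0.9 * c))  # about 2.1e-30 kg
```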
The interesting question is of course, from where
does this extra mass come?
Using relativistic mechanics it can be shown that
the kinetic energy of a moving body makes a contribution
to the mass of the body. It turns out that
mass = energy/c^2 or, in a more recognizable form,
E = mc^2
which is Einstein’s famous equation. Einstein
originally derived this equation by using the idea
that radiation exerts a pressure on an absorbing body.
This equation states the equivalence of mass
and energy. It establishes that energy can be
converted into mass and vice versa. For example,
when a particle and its antiparticle collide, all
the mass is converted into energy. Mass is converted
into energy in nuclear fission. When a body
gives off energy E in the form of radiation, its mass
decreases by an amount equal to E/c^2.
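Both directions of the conversion can be illustrated with a short Python sketch; the 1 g of mass and the 500 J of radiated energy below are arbitrary example figures.

```python
c = 3.0e8  # speed of light (m/s)

def energy_from_mass(m):
    """Energy equivalent (J) of a mass m (kg), from E = mc^2."""
    return m * c ** 2

def mass_from_energy(E):
    """Mass equivalent (kg) of an energy E (J), from m = E/c^2."""
    return E / c ** 2

print(energy_from_mass(1.0e-3))  # 1 g of mass is equivalent to about 9 x 10^13 J
print(mass_from_energy(500.0))   # radiating 500 J reduces the mass by about 5.6 x 10^-15 kg
```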
In Special Relativity, the Law of Conservation of
Energy and the Law of Conservation of Mass have been
replaced by the Law of Conservation of Mass-Energy.
Note that the fact that mass increases as a body
gains velocity effectively limits all man-made objects
to travel at speeds lower than the speed of light.
The closer a body gets to the speed of light, the
more massive it becomes. The more massive it
becomes, the more energy that has to be used to give
it the same acceleration. To accelerate the
body up to the speed of light would require an infinite
amount of energy. Clearly, this places
a limit on both the speed that can be attained by
a spacecraft and therefore the time it takes to travel
from one point in space to another.
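How quickly the relativistic mass grows as v approaches c can be seen by tabulating the mass dilation factor 1/√(1 - v^2/c^2) for a few speeds, as in the small Python sketch below.

```python
import math

for fraction in (0.5, 0.9, 0.99, 0.999, 0.9999):
    factor = 1 / math.sqrt(1 - fraction ** 2)
    print(f"v = {fraction}c -> mass is {factor:.1f} times the rest mass")

# The factor grows without limit as v approaches c, which is why an infinite
# amount of energy would be needed to accelerate a body to the speed of light.
```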
Experimental Verification of E = mc^2 (Not Examinable)
Einstein's famous mass-energy relationship E = mc^2
has been verified experimentally on many occasions. The
results of the latest experimental verification were reported in the
4th March 2006 edition of New Scientist Magazine. Two groups
of scientists studied gamma-ray emission from radioactive sulfur and
silicon atoms. One group at MIT used a very high-precision
mass-measuring apparatus called a Penning Trap to measure the mass
of particles before and after gamma-ray emission. The other
group at the US National Institute of Standards and Technology (NIST)
used a high-precision spectrometer to measure the wavelength of each
emitted gamma-ray, and thus determine their energy. When each
group was completely confident in the accuracy of their results,
they faxed them to each other. Imagine the tension as they
awaited each other's results. In the end the groups showed
that E does indeed equal mc^2
to better than 5 parts in 10 million.
Space Time (Not Examinable)
The Theory of Special Relativity shows that space
and time are not independent of one another but are
intimately connected. The theory shows that
objects moving at very high speeds relative to a stationary
observer appear contracted in the direction of motion
and have clocks that appear to run slow. It
seems that some space (length) has been exchanged
for some time. Although the length of the object
is shorter as seen by the stationary observer, each
of the object’s seconds is of longer duration than
each of the stationary observer’s seconds. The
two effects of time dilation and length contraction
balance each other, with space being exchanged for
time. Thus, in relativity it makes very good
sense to speak of space-time rather than space and
time as separate entities.
Such considerations have led to the concept of
four-dimensional space-time. In this four-dimensional
space-time, space and time can intermix with part
of one being exchanged for part of the other as the
reference frame is changed. In 1908, Hermann
Minkowski provided a mathematical treatment of
Special Relativity in which he developed relativistic
kinematics as a four-dimensional geometry (4).
He assigned three spatial coordinates and one time
coordinate to each event. Thus, in Special
Relativity, an event in space-time may be represented
in a Cartesian coordinate system, by quoting its x, y, z and
t coordinates. The first three coordinates tell
where the event occurred; the fourth coordinate tells
when the event occurred. (See ref. 4 for a good
introduction to Minkowski Space-Time diagrams.)
Standard of Length
Length is one of the fundamental quantities
in Physics because its definition does not depend
on other physical quantities. The SI unit of
length, the metre was originally defined as one ten-millionth
of the distance from the equator to the geographic
North Pole (6). The first truly international
standard of length was a bar of platinum-iridium alloy
called the standard metre and kept in Paris.
The bar was supported mechanically in a prescribed
way and kept in an airtight cabinet at 0 °C.
The distance between two fine lines engraved
on gold plugs near the ends of the bar was defined
to be one metre (7).
In 1961 an atomic standard of length was adopted
by international agreement. The metre was defined
to be 1 650 763.73 times the wavelength of the orange-red
light from the isotope krypton-86. This standard
had many advantages over the original – increased
precision in length measurements, greater accessibility
and greater invariability to list a few (7).
In 1983 the metre was re-defined
in terms of the speed of light in a vacuum.
The metre is now defined as the distance light travels
in a vacuum in 1/299792458 of a second as measured
by a cesium clock (2 & 6). Since the speed
of light is constant and we can measure time more
accurately than length, this standard provides increased
precision over previous standards. The reason
for that particular fraction (1/299792458) is that
the standard then corresponds to the historical definition
of the metre – the length on the bar in Paris.
So, our current standard
of length is actually defined in terms of time in
contrast to the original standard metre, which was
defined directly in terms of length (distance).
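The arithmetic behind the 1983 definition can be checked in one line (shown here as a tiny Python sketch):

```python
c = 299792458      # defined speed of light in a vacuum (m/s)
t = 1 / 299792458  # time interval used in the definition of the metre (s)
print(c * t)       # 1 metre (up to floating-point rounding)
```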
References
1. Hawking, S.W. & Israel, W. (1979). General Relativity – An Einstein Centenary Survey, Cambridge, Cambridge University Press
2. Hawking, S.W. (1988). A Brief History of Time, London, Bantam Press
3. Whitrow, G.J. (1984). The Natural Philosophy of Time, Oxford, Oxford University Press
4. Born, M. (1965). Einstein's Theory of Relativity, New York, Dover Publications Inc.
5. Resnick, R. (1968). Introduction To Special Relativity, New York, Wiley
6. Bunn, D.J. (1990). Physics for a Modern World, Sydney, The
7. Halliday, D. & Resnick, R. (1966). Physics Parts I & II Combined Edition, New York, Wiley
SPACE TOPIC PROBLEMS II
Describe an experiment that you could perform
in a reference frame to determine whether or not the
frame was non-inertial.
A spacecraft is travelling at 0.99c.
An astronaut inside the craft records a time of
1 hour for a certain event to occur.
How long would a stationary observer, who sees the
craft travelling past at 0.99c, record for this event?
A missile travelling at 9/10 the speed of light
has a rest length of 10 m.
Calculate the length of the moving missile as
measured by a stationary observer directly under the
flight path of the missile.
An electron with a rest mass of 9.11 x 10^-31
kg is travelling at 0.999c.
Determine the relativistic mass of the electron.
(approximately 2.0 x 10^-29 kg)
A particular radioactive isotope loses 5 x 10^2
J of energy. Calculate
its resultant loss of mass.
(5.6 x 10^-15 kg)
The radius of our galaxy is 3 x 10^20
m, or about 3 x 10^4 light years.
Can a person, in principle, travel from the
centre to the edge of our galaxy in a normal lifetime?
Explain using either time dilation or length contraction.
Determine the constant velocity that would be
needed to make the journey in 30 years (proper time).
(299999850 m/s or 0.9999995c)
(Hint - What is the shortest time that a stationary
earth observer could possibly measure for such a journey?)
A new EFT (extremely fast train) is travelling
along the tracks at the speed of light relative to the ground.
A passenger is walking towards the front of the
train at 5 m/s relative to the floor of the train.
Clearly, relative to the earth’s surface, the
passenger is moving faster than the speed of light.
Analyse this situation from the point of view
of Special Relativity.
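For checking purposes, the bracketed answers above can be reproduced with a short Python sketch; c is taken as 3 x 10^8 m/s, so small rounding differences are to be expected.

```python
import math

c = 3.0e8

def gamma(v):
    return 1 / math.sqrt(1 - (v / c) ** 2)

# Spacecraft at 0.99c: 1 hour of astronaut time as seen by the outside observer
print(gamma(0.99 * c) * 1.0)        # about 7.1 hours

# Missile of rest length 10 m moving at 0.9c
print(10.0 / gamma(0.9 * c))        # about 4.4 m

# Electron (rest mass 9.11e-31 kg) travelling at 0.999c
print(9.11e-31 * gamma(0.999 * c))  # about 2.0e-29 kg

# Mass lost when 5 x 10^2 J of energy is radiated
print(5.0e2 / c ** 2)               # about 5.6e-15 kg

# Crossing 3 x 10^4 light years at 0.9999995c
print(3.0e4 / gamma(0.9999995 * c)) # contracted distance of about 30 light years,
                                    # so the trip takes about 30 years of proper time
```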
Return to the first page of notes on Topic 9.2 Space | http://webs.mn.catholic.edu.au/physics/emery/hsc_space_continued.htm | 13
52 | This page answers a number of questions about light.
According to classical theories, particles and waves are two fundamentally different things. A wave has a wavelength and no well-defined location, and when different waves meet then they form interference patterns but keep going as if nothing happened. A particle does have a very well-defined position and no wavelength, and when different particles meet then they collide. Classical optics treats light and other electromagnetic waves as a wave phenomenon, and classical mechanics treats mass as made up of particles. The tacit assumption is that light is a wave phenomenon and therefore does not consist of separate particles, and that mass is made up of separate particles and therefore cannot show wave phenomena such as interference.
At the beginning of the 20th century it was discovered (through research by many people, including Albert Einstein) that this view is too simple, and that all things sometimes act as particles, and sometimes show wave phenomena. Light acts in some circumstances (such as in a solar cell) as a stream of separate particles which we now call photons, and particles such as electrons can under certain circumstances give interference patterns. In general, the particle characteristics of wave phenomena and the wave characteristics of particles are only important when you look at very small length scales, about as small as the wavelength or even smaller. At the much greater length scales of daily life, the particle nature of light and the wave nature of particles are not important. These things are described by quantum mechanics.
At the very smallest length scales, all things have characteristics of particles. At those length scales, there turn out to be only a limited number of different kinds of particles, which are called elementary particles. A few of the better-known kinds of elementary particles are protons, neutrons, and electrons (which together form atoms and hence matter), photons (electromagnetic waves), neutrinos (of which there are at least three kinds), and gravitons (gravity waves). Protons and neutrons are made up of even smaller particles called quarks, but those cannot be found on their own.
Whether a group of similar elementary particles behaves at the length scales of daily life like a group of particles or like a wave depends on to which of two fundamental classes of elementary particles they belong. These two fundamental classes are called bosons and fermions, after Mr. Bose and Mr. Fermi who made important discoveries in this area. Protons, neutrons, electrons, and neutrinos are fermions, and photons and gravitons are bosons.
Bosons (such as photons) do not notice each other and can be at the same location at the same time without any problem. That's why different rays of light can cross each other without any problem and continue to go straight ahead as if nothing happened. Because bosons do not notice each other, it is very difficult to build a stable structure out of bosons.
Fermions (such as protons, neutrons, and electrons) do notice each other and cannot be at the same place at the same time. (Quantum physicists say that different fermions cannot be in the same quantum state.) That's why two material things cannot just move through one another.
People have many characteristics by which you can tell the difference between them. Even identical twins can be told apart if you study them carefully enough, for example by individual hairs or scars or individual cells in their bodies. At that level, there are many differences. Describing a person completely so that an exact copy could be made to the smallest level would require a truly staggering amount of information, more than would fit in all the computers in the whole world. It is therefore no problem to ascribe a different personality to each human.
Photons, however, have only very few characteristics by which they may be distinguished from one another. They have a not quite fixed amount of energy, an uncertain location, and a not quite fixed direction, and no other characteristics at all in which they differ from each other. The vagueness of the energy, location, and direction are quantum mechanical effects that are only noticeable for very small things.
For humans, their location and direction is usually not seen as part of their personality, and if you treat photons the same, then only their energy is left as part of their personality, and even the value of that quantity is somewhat uncertain. All in all, it is not easy to ascribe an individual personality to a photon. If two photons with about the same energy at about the same location go in about the same direction, and if we measure one of those photons a little while later, then we cannot be sure which of the two we measured. Also, you can see a photon only once, when you measure it, and that measurement changes the photon, because you can only notice the photon if it changes something in your detector, and that must (through Newton's Law that for each action there is a reaction in the opposite direction) affect the photon itself. So, you cannot keep track of a photon all the time, and that makes it even more difficult to establish the identity of a photon.
Light consists of individual photons. A given photon can be seen by just one observer, because to be able to notice the photon a change must occur inside the eye of the observer, and that can only happen when the photon is absorbed by a light-sensitive cell in the retina in the eye. The observation of the photon makes the photon disappear. I think I read somewhere that a light-sensitive cell in the human eye needs at least about a dozen photons to arrive within a fraction of a second for the light to be noticed, so we cannot see individual photons. We can build instruments that can see individual photons.
The question whether photons are a theoretical device or reality is a philosophical question. How do you define the difference between a theoretical device and reality?
It is reasonable to assume the reality of photons because certain measurable things in the Universe can be accurately predicted from the assumed characteristics of photons. This suffices, and the reality of other things, such as atoms and cars and people, can be shown to be reasonable in the same way.
It is of course not a coincidence that photons seem to fit so well in reality; that is because the characteristics of photons were deduced from measurements of reality.
Einstein found that mass can be transformed into energy, and energy into mass. Gravity sees no difference between mass and energy, so something can feel gravity because it has mass or because it has energy, or both.
If something goes faster, then it has more energy, which corresponds to more mass, so you could say that the mass of something has two parts: one that is due to the extra energy that the thing has (such as energy of speed [kinetic energy] or heat − there are still more of them), and one that remains if you remove all of the extra energy (which means that the thing doesn't move at all and is as cold as it can be). That last part of the mass is called the rest mass: the mass that the thing would have if it were completely at rest. The rest mass does not depend on the state of the thing (on its speed or temperature, for example). For example, the rest mass of all electrons is the same, so you can put that in a table in a book. The "mass-with-all-energy-taken-into-account" is different for each electron, and can be different from one moment to the next, so you cannot put that one in a table in a book. If physicists or astronomers talk about mass, then they usually mean the rest mass.
It turns out that matter has (non-zero) rest mass, but light does not. If you take all extra energy out of light, then nothing at all remains. When you ask a physicist or astronomer if light has mass, then they can answer "no" if they mean rest mass, or "yes" if they take the extra energy into account.
Light and matter are the same in some respects (for example, both of them notice gravity), but are different in other respects. For example, light can easily travel through a piece of glass without bothering the glass, but matter cannot do that. So, it is still useful (even for scientists) to distinguish between light and matter.
Things such as light that have no rest mass cannot be put on a scale to be weighed, so they have no weight (but they do have mass, if you take the energy into account). Mass and weight are not the same. You don't feel weight when you're floating freely in space, but you then still have mass, otherwise the force of gravity would not keep pulling you down and/or keep changing the direction of your movement.
The direction of light can be changed in different ways: by reflection, by scattering, by refraction, and by deflection (for example by the gravity of a massive object).
The difference between reflection and scattering of light is not always easy to determine. The difference between refraction and deflection of light is also not easy to specify. If the deflection takes little time, then it looks just like refraction.
If a ray of light passes close to a massive object, then it is deflected slightly by the gravity of that object. If the deflection is small, then it is approximately equal to
φ ≈ 720 G M / (π c² r) degrees
where G is the Universal Constant of Gravitation, M the mass of the object that the ray passes, c the speed of light, and r the shortest distance of the ray of light from the center of the object. If you measure M in units of the mass of the Sun and r in kilometers, then φ ≈ 338.5/r degrees. If you measure M in units of the mass of the Earth and r in kilometers, then φ ≈ 0.001/r degrees. The greatest deflection that a ray of light gets if it brushes past the Sun is 1.75 seconds of arc, and if it passes close to the Earth 0.0006 seconds of arc. These are very small deflections.
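To get a feel for how small these angles are, the formula can be evaluated directly. The sketch below is only an illustration; the constants are standard textbook values supplied here, not values given in the text.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8    # speed of light, m/s
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg

def deflection_arcsec(mass_kg, r_m):
    """Approximate deflection (in arcseconds) of a light ray passing at distance r."""
    phi_rad = 4.0 * G * mass_kg / (C**2 * r_m)
    return math.degrees(phi_rad) * 3600.0

print(deflection_arcsec(M_SUN, 6.96e8))     # grazing the Sun:   about 1.75 arcsec
print(deflection_arcsec(M_EARTH, 6.371e6))  # grazing the Earth: about 0.0006 arcsec
```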
The only alternatives to reflection of photons are absorption of photons and the transmission of photons (i.e., being transparent). I haven't heard that supercold materials become transparent, so I assume that supercold materials (with a temperature close to the absolute zero point) absorb and reflect photons just like warm materials, so that supercold things can be visible against a suitable background with a different color or brightness.
If you are somewhere where everything has the same temperature and there is no light from outside, then the surroundings are filled with (heat) radiation (including visible light) that appears equally strong from all directions, so then you can't see anything because there are no differences in color or brightness anymore.
So, you can only see contrasts if there are large temperature differences between the sources of light in your surroundings. Sunlight is sent into space at a temperature of about 6000 kelvin (degrees above absolute zero), which is much higher than the typical temperature in our environment (which is about 290 kelvin), and that is why things on Earth that the Sun shines on can show so much contrast (in color or brightness).
The heat radiation that things emit depends on the things' temperature. The higher the temperature is, the higher the average frequency (and lower the wavelength) of the radiation. We are much colder than stars, so we emit heat radiation at a much greater wavelength than stars, namely infrared radiation instead of visible light.
You only see the light that reaches your eyes, just like you only hear sounds that reach your ears or smell smells that reach your nose. In this regard, all senses are equal. Light in space behaves just like light on Earth, so in space, too, you only see an object if it emits or reflects light into your eyes. If no light falls on an object and if it does not emit light of its own, then that object is invisible to everybody.
Only those parts of the Earth or of a spacecraft that are lit are visible. If the Earth moves between the spacecraft and the Sun, then the spacecraft moves through the shadow of the Earth where it is night. Then, no sunlight reaches the spacecraft and it is dark (except where lamps burn in or on the spacecraft). From the spacecraft, you can then only see things on Earth that emit light themselves, such as city lights or gas flares on oil rigs. If the spacecraft gets close to the boundary between day and night again, then you can see a little sunlight at the edge of the Earth that is refracted by the atmosphere of the Earth, just like from the ground you can see the sky brightening some time before sunrise and it doesn't get completely dark until some time after sunset.
The landings on the Moon were always done in places where it was daytime then, so the Sun shone down on those sites and the ground looked bright in photographs that the astronauts took.
So, it is not always dark when you're in space, because the Sun always shines. The Sun is hidden from your view only if you're in the shadow of a planet or moon or asteroid. The space probes that we send to the Moon or to the other planets are in sunshine for the whole trip, until they go into orbit around the Moon or the planet. When they're in orbit, then they can go into the shadow of the Moon or planet once in a while, and then they are in darkness.
If you're in deep space between the stars, then it is about as dark as it is on Earth in the middle of a clear night far away from city lights. You can then see all of the stars, so it is not completely dark, but all of the light from all of the stars put together is not enough to read a book by.
Scientists noticed in the 17th century that light has properties of a wave phenomenon. All other wave phenomena that they knew, such as water waves and sound waves, require a physical medium to travel through. Surely, something had to do the waving. For example, there are no water waves without water, and sound does not travel through the vacuum of empty space (so all movies in which you can hear space ships travel through space are wrong). People therefore assumed that light also needs a medium to travel through, and that medium had to fill the whole Universe (because we can see the planets and stars) but it could not give any friction (because otherwise the planets would have ground to a halt long ago because of the head wind from the medium). People started searching for proof of the unknown medium, which they called the aether.
The search for the aether turned out to be fruitless. No matter what was tried to catch the aether in some experiment, nothing provided convincing proof that the aether exists. The best-known of the experiments was the one by Michelson and Morley in 1887, who tried to very accurately detect the motion of the Earth relative to the aether, but found no trace of it. Many more of these kinds of experiments have been tried.
So, there are no indications that our current understanding of light (without an aether) is incomplete. If there were an aether with measurable influence on light, then you'd expect that our measurements of light would show unexplained deviations compared with our theory of light, which does not include the aether, but there are no such deviations. Of course, it is always possible that there are as yet unknown kinds of particles or other things in the Universe, but if there is yet a kind of undiscovered aether that fills the whole Universe, then it apparently has no influence on light, so it would not be the aether that people had been looking for for so long.
For more information about the search for the aether and about the Michelson-Morley experiment, you can go to http://galileoandeinstein.physics.virginia.edu/lectures/michelson.html.
Light that meets obstacles will be reflected or scattered by them. That means that part of the light is sent into different directions. And that is a good thing, because otherwise we could only see things that emit light of their own, such as the Sun and fire and lamps, but not the ground or a tree or a wall. If the light changes direction in the same way across a large area, then we call it refraction or reflection of light. If the light goes randomly in all kinds of directions, then we call it scattering of light.
How well scattering works depends on the size of the obstacles (particles) and on the wavelength of the light.
A cloud in the sky is made up of very many droplets of water, which are really small but yet much greater than the wavelength of the light. In such a case, all wavelengths (colors) are scattered equally well, and that's why such a cloud looks white.
Fog is like a cloud that lies on the ground, so it is also made up of very many very tiny drops of water. If you stand in a fog at night near a street light or lamp, then it seems as if part of the light doesn't come from the lamp but from the fog around the lamp. That light did in fact come from the lamp and was on its way in a direction very different from that towards your eye, but then bumped into a fog droplet and was redirected towards your eye after all. Because your eye can only see where the light came from last, it looks as if the light didn't come from the lamp but from the fog droplet instead.
If the diameter of the scattering particles is less than about one tenth of the wavelength of the light, then the scattering is much easier for smaller wavelengths than for longer wavelengths (namely, inversely proportional to the fourth power of the wavelength). This kind of scattering is called Rayleigh scattering, after Mr. Rayleigh who explained it. Blue light has a wavelength that is about half as large as that of red light, so blue light is scattered about 16 times more easily than red light.
Atoms and gas molecules (like those in the air) and tiny dust and smoke particles are small enough that they can produce Rayleigh scattering of light. Part of each light ray that comes from the Sun and travels through the atmosphere is scattered, and this part is mainly the blue part. Blue will be lacking from light rays coming directly from the Sun to your eyes, because it was scattered into many different directions, so the Sun looks a bit yellow (because if you take away blue from white light then you get yellow). Of light rays that pass you by, some blue light is scattered in all directions, and also towards your eyes, and that light does not seem to come from the Sun but rather from the air molecules that scattered it towards you. That's why the sky looks blue.
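The factor of about 16 follows directly from the fourth-power law. A minimal sketch (the 700 nm and 450 nm figures are typical red and blue wavelengths added here for illustration; the text itself only says the ratio is about two):

```python
# Rayleigh scattering strength scales as 1 / wavelength^4,
# so halving the wavelength multiplies the scattering by 2**4 = 16.
def rayleigh_ratio(lambda_long, lambda_short):
    return (lambda_long / lambda_short) ** 4

print(rayleigh_ratio(2.0, 1.0))      # wavelength halved -> factor 16, as in the text
print(rayleigh_ratio(700.0, 450.0))  # typical red vs. blue wavelengths (nm) -> about 6
```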
In empty space (a vacuum), light (and other kinds of electromagnetic radiation) always travels at the same speed, which is therefore usually called the speed of light. That speed of light is by definition exactly equal to 299,792,458 m/s and is usually denoted c in formulas. In terms of miles, this is equal to 186,283 miles per second (rounded to the nearest whole number).
We do not know why light should travel through a vacuum at a constant speed, so you could say that we don't fully understand light yet. But how can you prove to someone else that you completely understand something, even if that other person would not understand your explanation of how it works? I think that the only way you can do that is to show that you can make correct predictions about the thing in all cases, because someone else can tell if your predictions come true, even if he doesn't understand where you get those predictions. However, we cannot possibly check all circumstances to make sure that our predictions are correct, so I do not think that we'll ever achieve ultimate understanding. But we can strive to get closer to it.
Our best models of how everything works still contain many "free parameters", like knobs and dials that have to be set very carefully to specific values in order for everything to turn out as it is, but of which we do not know why precisely those values are necessary. One objective of modern science is to reduce the number of these free parameters, by uncovering hidden logical patterns that explain why certain free parameters are in fact tied together, so that a smaller number of free parameters is enough to explain everything. The ultimate goal is to find a Grand Unified Theory or Theory of Everything, which contains no free parameters at all anymore (but does predict everything accurately). Then all values that seem to exist independently, such as the mass of the proton or the diameter of a hydrogen atom or the speed of light in a vacuum or the universal constant of gravity will turn out to be based entirely on dimensionless numbers such as pi and 4 and the square root of 2.
The current value of the speed of light in a vacuum is set by definition, but was the result of a long series of measurements and calculations. Ole Rømer was the first, in the year 1676, to find a reasonable value for the speed of light (expressed as "22 minutes to cross the orbit of the Earth", but the "true" value is just below 17 minutes). Hippolyte Fizeau was the first, in 1849, to measure the speed of light in a laboratory experiment (and found a value that was about 4 percent too high). After that, the measurements became increasingly accurate.
In ordinary life, speeds add up. If Ann walks 1 km/h faster than Burt and Burt walks 2 km/h faster than Clara, then Ann goes 1 + 2 = 3 km/h faster than Clara. The same does not hold for light (and other kinds of electromagnetic radiation). If Ann, Burt, and Clara walk in the sunshine and measure the speed of the sunlight, then all three of them get the same result, even though they are not all walking equally fast.
If light travels not through empty space but rather through a gas (air) or a transparent fluid or solid, then the speed of the light in that substance is less than the speed of light in empty space. The ratio of the speed of light in empty space to the speed of light in a substance is equal to the index of refraction of that substance. The index of refraction of water is about 1.4 (depending on the color of the light), so the speed of light in water is equal to about c/1.4, which is about 0.7c, or about 210,000,000 m/s (about 130,000 miles per second).
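The same division works for any transparent substance once its index of refraction is known. A minimal sketch (the indices for glass and air are typical values added for illustration; the text only gives water's):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def speed_in_medium(n):
    """Speed of light in a material with refractive index n."""
    return C / n

for name, n in [("water", 1.4), ("window glass", 1.5), ("air", 1.0003)]:
    print(f"{name:12s} n = {n:<7} v = {speed_in_medium(n):,.0f} m/s")
```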
Here is the speed of light in a number of different units.
1,079,252,849 km/h (kilometers per hour)
670,618,310 mph (miles per hour)
299,792,458 m/s (meters per second)
299,792.458 km/s (kilometers per second)
186,282.864 mi/s (miles per second)
63,241.08 AU/yr (astronomical units per year)
173.14 AU/day (astronomical units per day)
7.481 earth circumferences per second
7.214 AU/h (astronomical units per hour)
1 ly/yr (lightyears per year)
0.3066 pc/yr (parsecs per year)
0.002004 AU/s (astronomical units per second)
To know how long light takes to travel 900 km you need only divide that distance by the speed of light, which is (see above) almost 300,000 km/s. So, 900 km takes light only 900/300,000 = 0.003 seconds to travel (in empty space). 900 km is equivalent to about 560 mi, so you can also calculate the time as 560/186,283 = 0.003 seconds.
Massless particles always travel at the speed of light, so they do not need to be accelerated to that speed. Light (photons) does not need to accelerate from zero to the speed of light, but goes at the speed of light right from the start. Other wave phenomena need no acceleration, either, but go at the appropriate wave speed right from the start.
A lightyear is the distance that light travels (through empty space)
in one year. A lightyear is not a period, but a
distance. The speed of light in empty space is a constant
(see above), but one can argue about the length of a year, if you want
to be very precise. We'll use the average Julian year of 365.25 days.
With this, a lightyear is equal to
299,792,458 * 60 * 60 * 24 *
365.25 = 9,460,730,472,580,800 m or about 9.5 million million
kilometers or 5.9 million million miles.
The speed of light can also be combined with shorter units of time such as days and seconds. Here is a list.
lightsecond: 299,792,458 m (186,283 mi)
lightminute: 17,987,547,480 m (11,176,972 mi)
lighthour: 1,079,252,848,800 m (670,618,310 mi)
lightday: 25,902,068,371,200 m (16,094,839,450 mi)
lightyear: 9,460,730,472,580,800 m (5,878,640,109,306 mi)
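All of the entries above follow from multiplying the speed of light by the number of seconds in each period (with the 365.25-day Julian year). A quick sketch; note that it uses the international mile, so the last digits of the mile figures may differ very slightly from the table:

```python
C_M_PER_S = 299_792_458          # exact, by definition
MILE_M = 1_609.344               # international mile in meters

periods = {
    "lightsecond": 1,
    "lightminute": 60,
    "lighthour": 3_600,
    "lightday": 86_400,
    "lightyear": 86_400 * 365.25,  # Julian year
}

for name, seconds in periods.items():
    meters = C_M_PER_S * seconds
    print(f"{name:12s} {meters:,.0f} m  ({meters / MILE_M:,.0f} mi)")

# Travel time over 900 km, as in the example above:
print("900 km takes", 900_000 / C_M_PER_S, "seconds")  # about 0.003 s
```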
A halo is a circle of light (or part of such a circle) around the Sun or Moon. "Halo" is also used more generally to refer to any optical phenomenon involving the refraction of light by ice crystals. The most common kind of circular halo surrounds the source of light at a distance of about 22 degrees from the center of the source of light. The most common kind of optical phenomenon involving ice crystals (at least in my experience) are sundogs, which are patches of light (often with the colors of the rainbow in them) at about 22 degrees to the left or right of the Sun.
A halo is caused by sunlight or moonlight being refracted into your direction by small ice crystals in the atmosphere. If you see a halo, then there must be small ice crystals in that direction in the atmosphere. The shape of the halo is determined by the orientation of the ice crystals in the air, and of course by the presence of ice crystals in the air. If there are no suitable ice crystals in the direction that is appropriate for a halo, then you won't see a halo in that direction.
For more information about halos, see http://www.sundog.clara.co.uk/halo/halosim.htm.
Some people can say that the Moon is brighter than the stars, and other people can say that the stars are brighter than the Moon. Both groups can be right, because they mean different things by "brightness".
How do you tell if one thing is brighter than another thing? You might measure how much light reaches you from both things, and see which amount is larger. If you do that, then the Moon is brighter than all stars combined. Or you might measure how much light leaves both things (in all directions), and see which amount is larger. If you do that, then any star is much brighter than the Moon.
Seen from Earth, the stars look less bright than the Moon does because the stars are very much farther away from us than the Moon is. If you put the Moon as far away as the stars, then the Moon would be so dim that you couldn't see it anymore even with the best telescope.
If you see a bright light in the sky, then you can't tell immediately whether that light was reflected or whether it came directly from the place where it was created, without any reflections along the way. So, there must be a way to refer to "a small part of the sky [all directions] from where light arrived here", and that is commonly called a source of light. If you want to discuss where that light was originally created, then you have to use more words.
This is similar to sources of other things. A "source of water" is a small area from where water flows, regardless of where that water originally came from. A "source of wisdom" is a person who says many wise things, even if some or all of those wise things were told to that person by someone else.
In common speech, moonlight is "light coming from the Moon", regardless of the ultimate source of that light. This is practical. It means that you can see the Moon, and that you have some light for your nightly activities. For those things it makes no difference whether the moonlight was created by the Moon or is actually reflected light from somewhere else, perhaps from the Sun.
It is entirely similar to the definition of sunlight as "light coming from the Sun". These definitions focus on the effects rather than the causes. You need such effects-based definitions before you can start thinking about the causes.
The first thing you do when you find yourself in unfamiliar surroundings is to give the important things names, so you can talk about them. After a while you start to understand how those things relate to each other, but that is no reason to throw away the names that you gave to them earlier. "Moonlight" is a name, not an explanation. Everybody recognizes moonlight when they see it, even if they don't have any clue about where that light was ultimately generated.
There is another definition of sunlight as "light that was generated by the Sun". That definition focuses on causes rather than effects. It depends on the situation which of those definitions (cause-based or effects-based) is the most appropriate. If the distinction is important to you in a particular situation, then you must clarify which definition you use.
It is not reasonable to insist that everybody use a cause-based definition unless you know what all of the causes are, and history shows that it is unwise to be too confident about the completeness of your knowledge.
If you insist on a cause-based definition, then the definition of moonlight as "light from the Sun reflected by the Moon" is incomplete. The Moon does in fact generate some visible light of its own, as all objects do that are above absolute zero temperature. The cause-based definition excludes that light generated by the Moon from the definition of moonlight, which seems silly.
Also, the Moon does not reflect just sunlight, but also starlight, which was certainly not generated by the Sun, so it should then be included in the definition of moonlight, too. If there were a second star nearby to rival the Sun, then its light would surely have been included in the cause-based definition of moonlight, so it seems illogical to define moonlight purely in terms of sunlight.
In addition, part of the moonlight is sunlight that was reflected more than once, but the above definition is not clear about whether such multiply-reflected sunlight is included. One example of such multiply-reflected sunlight is earthshine, which makes the dark part of the Moon facing the Earth visible within a few days from new moon.
And finally, the Moon is not merely a passive reflector of sunlight, but leaves its mark on the reflected light. It reflects certain colors better than others, so that a knowledgeable scientist can tell the difference between sunlight and moonlight by looking at the spectrum of the light.
I expect that most people are not aware of these different components of the light coming from the Moon (even if they know that most moonlight is reflected sunlight). And perhaps we'll discover a few more components as our knowledge of the Universe increases. Any cause-based definition may therefore turn out to be incomplete, but the effects-based definition remains valid.
Now that we've seen several types of quadrilaterals that are parallelograms, let's learn about figures that do not have the properties of parallelograms. Recall that parallelograms were quadrilaterals whose opposite sides were parallel. In this section, we will look at quadrilaterals whose opposite sides may intersect at some point. The two types of quadrilaterals we will study are called trapezoids and kites. Let's begin our study by learning some properties of trapezoids.
Definition: A trapezoid is a quadrilateral with exactly one pair of parallel sides.
Since a trapezoid must have exactly one pair of parallel sides, we will need to prove that one pair of opposite sides is parallel and that the other is not in our two-column geometric proofs. If we forget to prove that one pair of opposite sides is not parallel, we do not eliminate the possibility that the quadrilateral is a parallelogram. Therefore, that step will be absolutely necessary when we work on different exercises involving trapezoids.
Before we dive right into our study of trapezoids, it will be necessary to learn the names of different parts of these quadrilaterals in order to be specific about its sides and angles. All trapezoids have two main parts: bases and legs. The opposite sides of a trapezoid that are parallel to each other are called bases. The remaining sides of the trapezoid, which intersect at some point if extended, are called the legs of the trapezoid.
The top and bottom sides of the trapezoid run parallel to each other, so they are the trapezoid's bases. The other sides of the trapezoid will intersect if extended, so they are the trapezoid's legs.
The segment that connects the midpoints of the legs of a trapezoid is called the midsegment. This segment's length is always equal to one-half the sum of the trapezoid's bases: midsegment = (b₁ + b₂)/2, where b₁ and b₂ are the lengths of the two bases.
Consider trapezoid ABCD shown below.
The midsegment, EF, which is shown in red, has a length equal to one-half the sum of the lengths of the two bases.
The measurement of the midsegment is only dependent on the length of the trapezoid's bases. However, there is an important characteristic that some trapezoids have that is solely reliant on its legs. Let's look at these trapezoids now.
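As a concrete illustration of the midsegment formula (the base lengths below are made up for the example):

```python
def midsegment(base1, base2):
    """Length of a trapezoid's midsegment: half the sum of its bases."""
    return (base1 + base2) / 2

# Example: bases of length 10 and 14 give a midsegment of length 12
print(midsegment(10, 14))  # 12.0
```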
Definition: An isosceles trapezoid is a trapezoid whose legs are congruent.
By definition, as long as a quadrilateral has exactly one pair of parallel lines, then the quadrilateral is a trapezoid. The definition of an isosceles trapezoid adds another specification: the legs of the trapezoid have to be congruent.
ABCD is not an isosceles trapezoid because AD and BC are not congruent. Because EH and FG are congruent, trapezoid EFGH is an isosceles trapezoid.
There are several theorems we can use to help us prove that a trapezoid is isosceles. These properties are listed below.
(1) A trapezoid is isosceles if and only if the base angles are congruent.
(2) A trapezoid is isosceles if and only if the diagonals are congruent.
(3) If a trapezoid is isosceles, then its opposite angles are supplementary.
Definition: A kite is a quadrilateral with two distinct pairs of adjacent sides that are congruent.
Recall that parallelograms also had pairs of congruent sides. However, their congruent sides were always opposite sides. Kites have two pairs of congruent sides that meet at two different points. Let's look at the illustration below to help us see what a kite looks like.
Segment AB is adjacent and congruent to segment BC. Segments AD and CD are also adjacent and congruent.
Kites have a couple of properties that will help us identify them from other quadrilaterals.
(1) The diagonals of a kite meet at a right angle.
(2) Kites have exactly one pair of opposite angles that are congruent.
These two properties are illustrated in the diagram below.
Notice that a right angle is formed at the intersection of the diagonals, which is at point N. Also, we see that ∠K ≅ ∠M. This is our only pair of congruent angles because ∠J and ∠L have different measures.
Let's practice doing some problems that require the use of the properties of trapezoids and kites we've just learned about.
Find the value of x in the trapezoid below.
Because we have been given the lengths of the bases of the trapezoid, we can figure out what the length of the midsegment should be. Let's use the formula we have been given for the midsegment to figure it out. (Remember, it is one-half the sum of the bases.)
So, now that we know that the midsegment's length is 24, we can go ahead and set 24 equal to 5x − 1. The variable is solvable now: 5x − 1 = 24, so 5x = 25 and x = 5.
Find the value of y in the isosceles trapezoid below.
In the figure, we have only been given the measure of one angle, so we must be able to deduce more information based on this one item. Because the quadrilateral is an isosceles trapezoid, we know that the base angles are congruent. This means that ∠A also has a measure of 64°.
Now, let's figure out what the sum of ∠A and ∠P is: 64° + 64° = 128°.
Together they have a total of 128°. Recall by the Polygon Interior Angle Sum Theorem that a quadrilateral's interior angles must sum to 360°. So, let's try to use this in a way that will help us determine the measure of ∠R. First, let's sum up all the angles and set the total equal to 360°.
Now, we see that the sum of ∠T and ∠R is 232°. Because segment TR is the other base of trapezoid TRAP, we know that the angles at points T and R must be congruent to each other. Thus, if we define the measures of ∠T and ∠R by the variable x, we have 2x = 232, so x = 116.
This value means that the measure of ∠T and ∠R is 116°. Finally, we can set 116 equal to the expression shown at ∠R to determine the value of y. We have 4(3y + 2) = 116.
So, we get y = 9.
While the method above was an in-depth way to solve the exercise, we could have also just used the property that opposite angles of isosceles trapezoids are supplementary. Solving in this way is much quicker, as we only have to find what the supplement of a 64° angle is. We get 180° − 64° = 116°.
Once we get to this point in our problem, we just set 116 equal to 4(3y+2) and solve as we did before.
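A quick numeric check of this result, mirroring the steps above:

```python
# Supplement of the 64-degree base angle, then solve 4*(3y + 2) = 116 for y
angle_R = 180 - 64           # 116
y = (angle_R / 4 - 2) / 3    # undo the multiplication by 4, the +2, then the *3
print(angle_R, y)            # 116 9.0
```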
After reading the problem, we see that we have been given a limited amount of information and want to conclude that quadrilateral DEFG is a kite. Notice that EF and GF are congruent, so if we can find a way to prove that DE and DG are congruent, it would give us two distinct pairs of adjacent sides that are congruent, which is the definition of a kite.
We have also been given that ∠EFD and ∠GFD are congruent. We learned several triangle congruence theorems in the past that might be applicable in this situation if we can just find another pair of congruent sides or angles.
Since segment DF makes up a side of △DEF and △DGF, we can use the reflexive property to say that it is congruent to itself. Thus, we have two congruent triangles by the SAS Postulate.
Next, we can say that segments DE and DG are congruent because corresponding parts of congruent triangles are congruent. Our new illustration is shown below.
We conclude that DEFG is a kite because it has two distinct pairs of adjacent sides that are congruent. The two-column geometric proof for this exercise is shown below.
SOLVING LINEAR EQUATIONS
AN EQUATION is an algebraic statement in which the verb is "equals" = . An equation involves an unknown number, typically called x. Here is a simple example:
x + 64 = 100.
"Some number, plus 64, equals 100."
We say that an equation has two sides: the left side, x + 64, and the right side, 100.
In what we call a linear equation, x appears only to the first power, as in the equation above. A linear equation is also called an equation of the first degree.
The degree of any equation is the highest exponent that appears on the unknown number. An equation of the first degree is called linear because, as we will see much later, its graph is a straight line.
Now, the statement -- the equation -- will become true only when the unknown has a certain value, which we call the solution to the equation.
We can find the solution to that equation simply by subtracting: x = 100 − 64 = 36.
36 is the only value for which the statement "x + 64 = 100" will be true. We say that x = 36 satisfies the equation.
Now, algebra depends on how things look. As far as how things look, then, we will know that we have solved an equation when we have isolated x on the left.
Why the left? Because that's how we read, from left to right. "x equals . . ."
In the standard form of a linear equation -- ax + b = 0 -- x appears on the left.
In fact, we are about to see that for any equation that looks like this: ax − b + c = d, the solution will always look like this: x = (d + b − c)/a.
There are two pairs of inverse operations. Addition and subtraction, multiplication and division.
Formally, to solve an equation we must isolate the unknown -- typically x -- on the left.
ax − b + c = d.
We must get a, b, c over to the right, so that x alone is on the left.
The question is:
How do we shift a number from one side of an equation to the other?
By writing it on the other side with the inverse operation.
For, on the one hand, that preserves the arithmetical relationship between addition and subtraction:
36 + 64 = 100 implies 36 = 100 − 64;
and on the other, between multiplication and division: 4 · 25 = 100 implies 4 = 100 ÷ 25.
Algebra is, after all, abstracted -- drawn from -- arithmetic.
And so, to solve this equation:
ax − b + c = d,
we transpose −b and +c to the right, and then divide by a:
ax = d + b − c
x = (d + b − c)/a.
We have solved the equation.
The four forms of equations
Solving any linear equation, then, will fall into four forms, corresponding to the four operations of arithmetic. The following are the basic rules for solving any linear equation. In each case, we will shift a to the other side.
1. If x + a = b, then x = b − a.
"If a number is added on one side of an equation, we may subtract it on the other side."
2. If x − a = b, then x = b + a.
"If a number is subtracted on one side of an equation, we may add it on the other side."
3. If ax = b, then x = b/a.
"If a number multiplies one side of an equation, we may divide it on the other side."
4. If x/a = b, then x = ab.
"If a number divides one side of an equation, we may multiply it on the other side."
In every case, a was shifted to the other side by means of the inverse operation. Every linear equation can be solved by combining those four formal rules.
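The four rules can be verified symbolically. The sketch below uses the sympy library purely as an illustration of the rules; it is not part of the original lesson.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

forms = [
    (sp.Eq(x + a, b), b - a),   # Form 1: x + a = b  ->  x = b - a
    (sp.Eq(x - a, b), b + a),   # Form 2: x - a = b  ->  x = b + a
    (sp.Eq(a * x, b), b / a),   # Form 3: ax = b     ->  x = b / a
    (sp.Eq(x / a, b), a * b),   # Form 4: x / a = b  ->  x = ab
]

for equation, expected in forms:
    solution = sp.solve(equation, x)[0]
    print(equation, "->", solution, solution == expected)
```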
Solving each form can also be justified algebraically by appealing to the Two rules for equations, Lesson 6.
When the operations are addition or subtraction (Forms 1 and 2), we call that transposing.
We may shift a term to the other side of an equation by changing its sign:
+ a goes to the other side as − a.
− a goes to the other side as + a.
Transposing is one of the most characteristic operations of algebra, and it is thought to be the meaning of the word algebra, which is of Arabic origin. (Arabic mathematicians learned algebra in India, from where they introduced it into Europe.) Transposing is the technique of those who actually use algebra in science and mathematics -- because it is skillful. And as we are about to see, it maintains the clear, logical sequence of statements. Moreover, it emphasizes that we do algebra with our eyes. When you see x + a = b, you should immediately see that x = b − a.
The way that is often taught these days, is to add −a to both sides, draw a line, and add:
While that is logically correct (Lesson 6), it is clumsy and inefficient.
What, after all, is the purpose of it? The purpose is simply to transpose a to the other side.
A logical sequence of statements
In an algebraic sentence, the verb is typically the equal sign = .
ax − b + c = d.
That sentence -- that statement -- will logically imply other statements. Let us follow the logical sequence that leads to the final statement, which is the solution.
The original equation (1) is "transformed" by first transposing the terms (Lesson 1). Statement (1) implies statement (2).
That statement is then transformed by dividing by a. Statement (2) implies statement (3), which is the solution.
Thus we solve an equation by transforming it -- changing its form -- statement by statement, line by line according to the rules of algebra, until x finally is isolated on the left. That is how books on mathematics are written (but unfortunately not books that teach algebra!). Each line is its own readable statement that follows from the line above -- with no crossings out
In other words, What is a calculation? It is a discrete transformation of symbols. In arithmetic we transform "19 + 5" into "24". In algebra we transform "x + a = b" into "x = b − a."
Problem 1. Write the logical sequence of statements that will solve this equation for x :
abcx − d + e − f = 0
First, transpose the terms:
abcx = d − e + f.
It is not necessary to write the term 0 on the right.
Then divide by the coefficient of x:
x = (d − e + f)/(abc).
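The same answer can be confirmed symbolically (again with sympy, purely as a check):

```python
import sympy as sp

a, b, c, d, e, f, x = sp.symbols('a b c d e f x')

solution = sp.solve(sp.Eq(a*b*c*x - d + e - f, 0), x)[0]
print(solution)  # (d - e + f)/(a*b*c)
```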
Problem 2. Write the logical sequence of statements that will solve this equation for x :
Problem 3. Solve for x : (p − q)x + r = s
Problem 4. Solve for x : ab(c + d)x − e + f = 0
Problem 5. Solve for x : 2x + 1= 0
x = −½
That equation, incidentally, is in the standard form, namely ax + b = 0.
Each of these problems illustrates doing algebra with your eyes. The student should see the solution immediately. In the example above, you should see that b will go to the other side as −b, and that a will divide.
That is skill in algebra.
Problem 7. Solve for x : ax = 0 (a ≠ 0).
Now, when the product of two numbers is 0, then at least one of them must be 0. (Lesson 5.) Therefore, any equation with that form has the solution,
x = 0.
We could solve that formally, of course, by dividing by a.
Problem 8. Solve for x :
Problem 9. Write the sequence of statements that will solve this equation:
When we go from line (1) to line (2), −x remains on the left. For, the terms in line (1) are 6 and −x.
We have "solved" the equation when we have isolated x -- not −x -- on the left. Therefore we go from line (3) to line (4) by changing the signs on both sides. (Lesson 6.)
Alternatively, we could have eliminated −x on the left by changing all the signs immediately:
Problem 11. Solve for x :
Problem 12. Solve for x:
(Hint: Compare Lesson 5, Problem 18.)
x = 5.
Transposing versus exchanging sides
Consider the equation a + b = c − x. We can easily solve this -- in one line -- simply by transposing x to the left, and what is on the left, to the right:
x = c − a − b.
Example. Solve for x : a + b = c + x.
In this Example, +x is on the right. Since we want +x on the left, we can achieve that by exchanging sides:
c + x = a + b
Note: When we exchange sides, no signs change.
The solution easily follows:
x = a + b − c
In summary, when −x is on the right, it is skillful simply to transpose it. But when +x is on the right, we may exchange the sides.
Problem 13. Solve for x :
Problem 14. Solve for x :
Problem 15. Solve for x :
Problem 16. Solve for x :
1. General Definition: If to every value (considered as real unless otherwise stated) of a variable x, which belongs to some collection (set) A, there corresponds one and only one finite value of the quantity y, then y is said to be a function (single valued) of x or a dependent variable defined on the set A; x is the argument or independent variable.
If to every value of x belonging to some set A there corresponds one or several values of the variable y, then y is called a multiple valued function of x defined on A. Conventionally the word "function" is used only in the sense of a single valued function, unless otherwise stated. Pictorially, if f : A → B and y = f(x), then y is called the image of x and x is the pre-image of y under f. Every function f : A → B satisfies the following conditions: (a) f ⊆ A × B, (b) for every a ∈ A there exists b ∈ B such that (a, b) ∈ f, and (c) this b is unique, i.e. no element of A has two different images.
2. Domain, Co-domain & Range of a Function: Let f : A → B; then the set A is known as the domain of f and the set B is known as the co-domain of f. The set of all f-images of elements of A is known as the range of f. Thus: Domain of f = A; Range of f = {f(a) | a ∈ A}.
It should be noted that the range is a subset of the co-domain. If only f(x) is given, then the domain is the set of those values of x for which f(x) exists (is defined). To find the range of a function there isn't any single approach, but one of the following is often useful: (i) When a function is given in the form y = f(x), express x, if possible, as a function of y, i.e. x = g(y). Find the domain of g; this will be the range of f. (ii) If y = f(x) is a continuous or piecewise continuous function, then the range of f will be the union of the intervals [min f(x), max f(x)] over all intervals on which f is continuous.
3. Classification of Functions: Functions can be classified into two categories: (i) One-one function (injective mapping) or many-one function: A function f : A → B is said to be a one-one function or an injective mapping if different elements of A have different f-images in B. Thus, for x₁, x₂ ∈ A, f(x₁) = f(x₂) implies x₁ = x₂ (equivalently, x₁ ≠ x₂ implies f(x₁) ≠ f(x₂)).
(Diagram of an injective mapping omitted.)
Note: (a) Any function which is entirely increasing or decreasing on its whole domain is one-one. (b) If every line parallel to the x-axis cuts the graph of the function at most at one point, then the function is one-one.
Many-one function: A function f : A → B is said to be a many-one function if two or more elements of A have the same f-image in B. Thus f is many-one if there exist x₁, x₂ ∈ A with x₁ ≠ x₂ but f(x₁) = f(x₂).
(Diagram of a many-one mapping omitted.)
Note: (a) Any continuous function which has at least one local maximum or local minimum is many-one. In other words, if there is even a single line parallel to the x-axis that cuts the graph of the function at at least two points, then f is many-one. (b) If a function is one-one, it cannot be many-one and vice versa. (c) All functions can be categorized as either one-one or many-one.
(ii) Onto function (surjective mapping) or into function: If the function f : A → B is such that each element in B (the co-domain) has at least one pre-image in A, then we say that f is a function of A "onto" B. Thus f : A → B is surjective iff for every b ∈ B there exists some a ∈ A such that f(a) = b. (Diagram of a surjective mapping omitted.)
Note that if range = co-domain, then f(x) is onto.
Into function: If f : A → B is such that there exists at least one element in the co-domain which is not the image of any element in the domain, then f(x) is into. (Diagram of an into function omitted.)
Note that if a function is onto, it cannot be into, and vice versa. Thus a function is exactly one of these four types: (a) one-one onto (injective and surjective); (b) one-one into (injective but not surjective); (c) many-one onto (surjective but not injective); (d) many-one into (neither injective nor surjective). Note: (a) If f is both injective and surjective, then it is called a bijective mapping. Bijective functions are also called invertible, non-singular or biuniform functions. (b) If a set A contains n distinct elements, then the number of different functions defined from A to A is nⁿ, and out of these, n! are one-one.
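For functions on finite sets, these classifications can be checked mechanically. A small illustrative sketch (the example mapping is made up):

```python
def classify(f, domain, codomain):
    """Classify a finite function given as a dict from domain to codomain."""
    images = [f[a] for a in domain]
    one_one = len(set(images)) == len(images)   # no two elements share an image
    onto = set(images) == set(codomain)         # every codomain element is hit
    return ("one-one" if one_one else "many-one") + " " + ("onto" if onto else "into")

A = [1, 2, 3]
B = ['p', 'q', 'r', 's']
f = {1: 'p', 2: 'q', 3: 'q'}   # 2 and 3 share an image; 'r' and 's' are never hit
print(classify(f, A, B))        # many-one into
```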
4. Algebraic Operations on Functions: If f and g are real valued functions of x with domain sets A and B respectively, then both f and g are defined on A ∩ B. On that common domain we define f + g, f − g, f·g and f/g as follows: (i) (f ± g)(x) = f(x) ± g(x); (ii) (f·g)(x) = f(x)·g(x); (iii) (f/g)(x) = f(x)/g(x), whose domain is {x ∈ A ∩ B : g(x) ≠ 0}.
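A sketch of these pointwise operations for functions represented on finite domains (illustrative only; real-valued functions on intervals work the same way in principle):

```python
def combine(f, g, op):
    """Pointwise combination of two dict-based functions on their common domain."""
    common = set(f) & set(g)                     # A intersect B
    return {x: op(f[x], g[x]) for x in common}

f = {1: 2.0, 2: 3.0, 3: 4.0}      # defined on A = {1, 2, 3}
g = {2: 1.0, 3: 0.0, 4: 5.0}      # defined on B = {2, 3, 4}

print(combine(f, g, lambda a, b: a + b))   # f + g on {2, 3}
print(combine(f, g, lambda a, b: a * b))   # f * g on {2, 3}

# f / g additionally excludes the points where g is zero
quotient = {x: f[x] / g[x] for x in set(f) & set(g) if g[x] != 0}
print(quotient)
```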
Area is the measure of the amount of surface covered by something. Area formulas for different shapes are sometimes different, but for the most part, area is calculated by multiplying length times width; this is how the area of squares and rectangles is found. Once you have the number answer to the problem, you need to figure out the units. When calculating area, you will take the units given in the problem (feet, yards, etc.) and square them, so your unit measure would be in square feet (ft²) (or whatever measure they gave you).
Let’s try an example. Nancy has a vegetable garden that is 6 feet long and 4 feet wide. It looks like this:
Nancy wants to cover the ground with fresh dirt. How many square feet of dirt would she need?
We know that an answer in square feet would require us to calculate the area. In order to calculate the area of a rectangle, we multiply the length times the width. So, we have 6 x 4, which is 24. Therefore, the area (and amount of dirt Nancy would need) is 24 square feet.
Let’s try that one more time. Zachary has a wall that he would like to paint. The wall is 10 feet wide and 16 feet long. How many square feet of wall would he be painting? Again we multiply the length times the width: 16 x 10 = 160, so Zachary would be painting 160 square feet of wall.
Sometimes, you will be given either the area or the perimeter in a problem and you will be asked to calculate the value you are not given. For example, you may be given the perimeter and be asked to calculate area; or, you may be given the area and be asked to calculate the perimeter. Let’s go through a few examples of what this would look like:
Valery has a large, square room that she wants to have carpeted. She knows that the perimeter of the room is 100 feet, but the carpet company wants to know the area. She knows that she can use the perimeter to calculate the area.
What is the area of her room?
We know that all four sides of a square are equal. Therefore, in order to find the length of each side, we would divide the perimeter by 4. We would do this because we know a square has four sides, and they are each the same length and we want the division to be equal. So, we do our division—100 divided by 4—and get 25 as our answer. 25 is the length of each side of the room. Now, we just have to figure out the area. We know that the area of a square is length times width, and since all sides of a square are the same, we would multiply 25 x 25, which is 625. Thus, she would be carpeting 625 square feet.
Now let’s see how we would work with area to figure out perimeter. Let’s say that John has a square sandbox with an area of 100 square feet. He wants to put a short fence around his sandbox, but in order to figure out how much fence material he should buy, he needs to know the perimeter. He knows that he can figure out the perimeter by using the area.
What is the perimeter of his sandbox?
We know that the area of a square is length times width. In the case of squares, these two numbers are the same. Therefore, we need to think, what number times itself gives us 100? We know that 10 x 10 = 100, so we know that 10 is the length of one side of the sandbox. Now, we just need to find the perimeter. We know that perimeter is calculated by adding together the lengths of all the sides. Therefore, we have 10 + 10 + 10 + 10 = 40 (or, 10 x 4 = 40), so we know that our perimeter is 40 ft. John would need to buy 40 feet of fencing material to make it all the way around his garden.
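Both square conversions can be written as two small helper functions; this sketch simply replays the two examples above:

```python
import math

def square_area_from_perimeter(perimeter):
    side = perimeter / 4      # all four sides of a square are equal
    return side * side

def square_perimeter_from_area(area):
    side = math.sqrt(area)    # the side is the number that times itself gives the area
    return 4 * side

print(square_area_from_perimeter(100))   # Valery's room: 625 square feet
print(square_perimeter_from_area(100))   # John's sandbox: 40 feet of fencing
```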
So far, we have been calculating area and perimeter after having been given the length and the width of a square or rectangle. Sometimes, however, you will be given the total perimeter, and a ratio of one side to the other, and be expected to set up an algebraic equation (using variables) in order to solve the problem. We’ll show you how to set this up so that you can be successful in solving these types of problems.
Eleanor has a room that is not square. The length of the room is five feet more than the width of the room. The total perimeter of the room is 50 ft. Eleanor wants to tile the floor of the room. How many square feet (ft 2) will she be tiling?
In this problem, we will be calculating area, but first we’re going to use the perimeter to figure out the length and width of the room.
First, we have to assign variables to each side of the rectangle. X is the most often used variable, but you can pick any letter of the alphabet that you’d like to use. For now, we’ll just keep things simple and use x. To assign a variable to a side, you first need to figure out which side they give you the least information about. In this problem, it says the length is five feet longer than the width. That means that you have no information about the width, but you do have information about the length based on the width. Therefore, you’re going to call the width (the side with the least information) x. Now, the width = x, and x simply stands for a number you don’t know yet. Now, you can assign a variable to the length. We can’t call the length x, because we already named the width x, and we know that these two measurements are not equal. However, the problem said that the length is five feet longer than the width. Therefore, whatever the width (x) is, we need to add 5 to get the length. So, we’re going to call the length x + 5.
Now that we’ve named each side, we can say that width = x, and length = x + 5. Here’s a picture of what this would look like:
Next, we need to set up an equation using these variables and the perimeter in order to figure out the length of each side. Remember, when calculating perimeter you add all four sides together. Our equation is going to look the same way, just with x’s instead of numbers. So, our equation looks like this:
x + x + x + 5 + x + 5 = 50
Now, we need to make this look more like an equation we can solve. Our first step is to combine like terms, which simply means to add all the x’s together, and then add the whole numbers together (for more help on this, see Combining Like Terms).
Once we combine like terms, our equation looks like this:
4x + 10 = 50
Next, we follow the steps for solving equations. (For additional help with this, see Solving Equations). We subtract 10 from each side of the equation, which leaves us with the following:
4x = 40
Now, we have to get x by itself, which means getting rid of the 4. In order to do this, we need to perform the opposite operation of what’s in the equation. So, since 4x means multiplication, we need to divide by 4 to get x alone. But remember, what we do to one side, we have to do to the other side. After dividing each side by 4, we get:
x = 10
Next, we have to interpret what this means. We look back and recall that we named the width x, so the width is 10. Now, we need to figure out the length. We named the length x + 5, so that means we have to substitute 10 in for x, and complete the addition. Therefore, we have 10 + 5, which gives us 15. So, our length is 15.
Now, we need to look back and remember that the problem asked us to calculate the area of the floor that Eleanor will be tiling. We know that in order to calculate area, we need to multiply the length times the width. We now have both the length and the width, so we simply set up a multiplication problem, like this: 10 x 15 = ? We multiply the two numbers together, and get 150.
Thus, your final answer is 150 ft².
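The whole word problem can also be scripted the same way the algebra was done by hand — set up the perimeter equation, solve for the width, then multiply. A sketch in plain arithmetic, mirroring the steps above:

```python
# Perimeter of Eleanor's room: x + x + (x + 5) + (x + 5) = 50
perimeter = 50
extra = 5                             # the length is 5 feet more than the width

width = (perimeter - 2 * extra) / 4   # 4x + 10 = 50  ->  x = 10
length = width + extra                # 15
area = width * length                 # 150 square feet
print(width, length, area)            # 10.0 15.0 150.0
```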
Now, we’ll give you several practice problems so that you can try calculating area and perimeter on your own.
1. Leah has a flower garden that is 4 meters long and 2 meters wide. Leah would like to put bricks around the garden, but she needs to know the perimeter of the garden before she buys the bricks.
2. David has a rug that is square, and the length of one side is 5 feet. He has an open floor space in his living room that is 36 square feet.
3. Debbie has pool in her back yard that has a perimeter of 64 feet. The length of the pool is 2 feet longer than the width. Debbie wants to buy a cover for the pool, and needs to know how many square feet she needs to cover.
When we combine like terms, we would get 4x + 4 = 64. Then, we would continue solving normally by subtracting 4 from each side, so the equation would simplify to 4x = 60. Lastly, we would divide each side by 4 (to get x by itself) and we would reach the conclusion x = 15. Thus, the width of the pool is 15 ft. However, the length is 2 more than the width, which means we would have to add 2 to the width; 2 + 15 = 17. Now, we have the length (17) and the width (15). The question asked for the area (square feet) of Debbie’s pool. To find area, you need to multiply the length times the width. Therefore, you would multiply 17 x 15, which gives us 255. Thus, Debbie would need a cover for her pool that is 255 ft².
4. Hector is planting a square garden in front of his house. He wants to plant carrots in the garden. He knows he can plant the carrots one foot apart. He has six feet across his yard (length) and he can plant carrots four feet deep (width).
5. Amanda is building a house, and she's trying to calculate the area of her bedroom. She knows that the living room is 22 feet long and 20 feet wide. She was told that her bedroom should be half of the area of the living room.
Table of Contents
- Physical Processes
- Dust Source Regions
- Synoptically Forced Dust Storms
- Dust Storms Caused by Mesoscale Systems
- Satellite Detection of Dust
- Forecasting Dust Storms
- Case: Long-Range
- Case: Medium-Range
- Case: Short-Range
- Dust Onset
- Why Dust Model Forecasts Differ
Dust Storm Hazards
There are countless examples of how dust storms can impact military and civilian life. One such example occurred on the evening of 24 April 1980, when helicopters en route to rescue U.S. hostages in Iran encountered a haboob (that's a dust storm generated by a convective downburst). Unable to fly through it at night, several helicopters turned back, leading to the mission's termination. Dust raised by one of the withdrawing helicopters obscured visibility, leading to a collision with a C-130 transport plane, in which eight servicemen lost their lives.
On 12 August 2005, blowing dust in the U.S. state of Washington led to several chain reaction accidents. More than 50 cars and trucks were involved and seven people were killed.
Intense dust storms reduce visibility to near zero in and near source regions, with visibility improving away from the source. From the edge of blowing dust to within 150 miles (241 km) downstream, visibility can range from ½ to three miles (800 to 4800 meters).
Dust settles when winds drop below the speed necessary to carry the particles, but some level of dust haze will persist for longer periods of time. For example, dust haze may limit visibility to four to six miles (5000 to 9000 meters) for days after a dust storm.
Note that air-to-ground (slant-range) visibility is reduced more than surface visibility. This may make it impossible, for example, to pick out an airfield from above, even when the reported horizontal surface visibility is three miles (about 5 km) or more.
Dust hazards include more than just limits on visibility. Take the case of dust from Africa. Every year, wind carries hundreds of millions of tons of dust from the Sahara and Sahel across the Atlantic to the Caribbean and southeastern United States. The dust transports various microorganisms and chemicals that latch onto the small particles.
These dust storms have increased in frequency and intensity since the 1970s, with many ill effects.
- In the Caribbean, roughly 30% of the bacteria isolated from airborne soil dust are known pathogens, able to infect plants, animals, and people
- Caribbean and Florida coral reefs have been declining since the late 1970s
- The incidence of asthma in Barbados has increased 17-fold since 1973
- Dust events correlate with toxic red tides off the coast of Florida
- Fungal outbreaks affecting commercial crops have occurred within days of dust events
Clearly, airborne dust can carry a potentially toxic brew. The average soil particle size within a dust cloud decreases as the cloud travels and the larger particles settle. Eventually, the remaining particles become so small that our lungs cannot readily expel them. Accurate dust forecasts may allow people to take protective actions to mitigate the health effects of airborne particulates.
Dust also impacts equipment such as aircraft and automobile engines and electro-optical systems.
Before you can forecast dust storms, it's important to understand basic information about them such as
- The various types of dust sources (some of which are shown in the montage)
- The characteristics of dust particles, particularly their size
- The process of dust storm initiation, transport, and dissipation
This module will help you understand dust storm processes and more accurately forecast dust storms. We'll look at examples of dust storms from around the world, focusing primarily on Southwest Asia and the Middle East. Regardless of location, though, the lessons can be applied to any region of the world given some knowledge of local climatology and dust source regions.
Dust moves through several processes, described below. Saltation and suspension are integral to the formation of dust storms since they loft dust into the air.
- By saltation, where small particles move forward through a series of jumps or skips, like a game of leap-frog. The particles are lifted into the air, drifting approximately four times farther downwind than the height that they attain above ground. If saltating particles return to the ground and hit other particles, they jump up and forward, continuing the process.
- By creep, where sediment moves along the ground by rolling and sliding. Large particles and/or light winds favor creep.
- By suspension, where sediment materials are lifted into the air and held aloft by winds. If the particles are sufficiently small and the upward air currents are strong enough to support the weight of the individual grains, they will remain aloft. The larger particles settle more quickly, although increases in wind speed keep progressively larger particles aloft. Note that strong winds can lift suspended dust particles thousands of meters upward and thousands of kilometers downwind, with turbulent eddies and updrafts holding them in suspension.
Particle Size and Settling Velocity
Dust particles remain suspended in the air when upward currents are greater than the speed at which the particles fall through air. This graphic shows the fall speed, or settling velocity, as a function of particle size.
Dust particle size is usually measured in micrometers, which are 1/1000 of a millimeter or 1/1,000,000 of a meter. Particles capable of traveling great distances usually have diameters less than 20 micrometers (much smaller than the width of a human hair).
Which types of particles fit this description? Clay and silt do. Appropriate source regions for dust storms have fine-grained soils rich in clay and silt.
Returning to the graph, 20-micrometer dust particles fall at a speed of about 100 millimeters per second, or roughly four inches per second. Particles larger than 20 micrometers in diameter fall disproportionately faster: 50-micrometer particles fall at about 500 mm/s, or half a meter per second. Particles smaller than 20 micrometers settle very slowly: ten-micrometer particles fall at only 30 millimeters per second, and 2-micrometer particles at only 1 millimeter per second. The finest clay particles settle so slowly that they can be transported across oceans without settling.
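For a rough sense of where numbers like these come from, the sketch below applies Stokes' law for small spheres. It is an illustration only, not the module's method: the particle density and air viscosity are assumed values, so the estimates will not exactly match the figures quoted above, but they show the same steep dependence of fall speed on particle size.

```python
# A minimal sketch, not from the module: a first-order Stokes-law estimate of
# settling velocity for small spherical particles. The particle density and air
# viscosity below are assumed values, so the results are indicative only.

def stokes_settling_velocity(diameter_um, particle_density=2650.0,
                             air_density=1.2, air_viscosity=1.8e-5):
    """Return the still-air settling velocity (m/s) of a small sphere."""
    g = 9.81                   # gravitational acceleration, m/s^2
    d = diameter_um * 1e-6     # micrometers to meters
    return (particle_density - air_density) * g * d ** 2 / (18.0 * air_viscosity)

for size_um in (2, 10, 20, 50):
    v = stokes_settling_velocity(size_um)
    print(f"{size_um:>2} um: {v * 1000:6.2f} mm/s")
```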
Sources of Dust: Desert
Precipitation binds soil particles together and promotes plant growth. Plant growth, in turn, binds the soil even more and shields the surface from wind. Consequently, dust storms occur in regions with little vegetation and precipitation. These conditions most often occur in deserts—when it hasn't rained recently. The rule of thumb is that dust is unlikely within 24 to 36 hours of a rainstorm.
A thin veneer of stones called desert pavement covers many desert regions. This veneer results from the process of deflation where wind removes the finer-grained material, leaving only stones on the surface, which suppress blowing dust. If the pavement is disrupted by human activities, such as farming or off-road driving, the fine-grained material will be exposed to the wind again, raising the likelihood of dust storms. Studies show that large-scale military operations in the desert increase the likelihood of dust storms at least five-fold.
When seasonal rains occur over desert and near-desert environments, runoff water can create flash floods. The resulting erosion washes soil particles downstream. This continues until the velocity of the water slows to a point where it can no longer carry the load of sediment. The heaviest particles are deposited first, the lightest particles last. Once the water evaporates, the stream bed becomes a prime source for blowing dust.
Other Dust Sources
This montage shows other sources of dust. Are you surprised to see ocean sediments and glacial deposits in the list? Each source type is described below.
Agricultural areas: Agricultural land that’s fallow, recently tilled, or has a marginal growing climate is a potential source area for dust. The mechanical breaking of soil creates an environment rich in fine-grained soil that is picked up and moved by seasonal winds.
We see this in the grain belts of northern Syria and Iraq, where seasonal rain is relied upon to water the crops. When drought occurs, the area becomes an active dust source region.
The same occurs in Colorado, New Mexico, and western Texas. An extreme example occurred on the southern Great Plains of the United States during the 1930s, the period known as the Dust Bowl.
Coastal areas: These MODIS images show well-defined dust plumes extending from the coastal area of the United Arab Emirates near Abu Dhabi. The dust plumes were generated by prefrontal southerly winds in advance of a cold front to the north.
River flood plains (alluvial plains): The flood plain of the Tigris and Euphrates Rivers in southern Iraq serves as the source region for many dust storms, particularly during shamal events. (A shamal is a northwesterly wind that blows over Iraq and the Persian Gulf states. It is often strong during the day and decreases at night.)
While river channels carry fairly sandy sediment, they deposit mud in the flood plain when they rise and flood. When the area dries out and desertifies, the rapid evaporation results in the formation of a salty white crust.
Ocean sediments: Ancient ocean sediments in the Baja California peninsula are the source region for the prominent dust plumes in this SeaWiFS image. The desiccated sediments were once a muddy sea floor that was lifted up above sea level.
Glacial deposits: Dust storms occur outside of the world’s deserts. This satellite image shows a dust storm blowing out from Iceland's southern coast. The bright white areas are glaciers. The melt water that emerges from beneath them carries a tremendous load of pulverized rock, or glacial flour. This material gets deposited on large mud flats referred to as outwash plains. The harsh climate and constantly shifting channels prevent vegetation from becoming established. During dry periods, the dust is picked up and carried offshore by high winds associated with storms in the North Atlantic. Similar glacial deposits can be found at high latitudes or high elevations around the world. For example, ancient deposits of wind-blown glacial flour, referred to as loess, fuel the prodigious dust storms of the Gobi desert in northwest China.
Dry lake beds: Dry lake beds are called playas in the U.S. and sabkhas in the Middle East. They arise as water erodes rocks and forms fine-grained soils. The erosion can occur over long periods of time or can happen quickly as a result of recent precipitation events.
When lakes dry up, the fine-grained deposits inhibit plant growth, which further contributes to dust availability. The salty deposits tend to be much lighter in color than the surrounding ground on satellite imagery, making them easy to detect.
The MODIS true color image above shows long plumes of dust coming off of dry lake beds and dry wetlands in the Sistan Basin. Straddling southern Afghanistan and eastern Iran, it’s one of the world’s driest basins. High-resolution (250-m) images like this show that entire flood plains, dried lakes, and agricultural areas do not erode. Rather, numerous small point sources with diameters of one to tens of kilometers erode to produce numerous individual dust plumes. It is these individual plumes that merge downstream to form mesoscale dust clouds and dust fronts.
Point Sources of Dust
As we've seen, most dust comes from a number of discrete areas that are small enough to be regarded as point sources, much like smokestacks. Many of the point areas are much lighter than the surrounding ground on satellite imagery, indicative of salt or gypsum-type compounds vs. the reddish-brown coloration of desiccated river flood plains (alluvial dust).
The black plus marks on this map are dust source areas in Iraq. Many of the red pluses are areas that were active before 2005 but are no longer so. The larger black pluses are additional dust source areas that were located by the Naval Research Laboratory (NRL).
These photographs show how the wetlands of southern Iraq have been restored, eliminating some source areas for blowing dust. However, the most prevalent ones between the Tigris and Euphrates rivers remain.
After an appropriate source, the next key ingredient for dust storm generation is wind from the surface through the depth of the boundary layer that’s strong enough to move and loft dust particles.
The first sand and dust particles to move are those from 0.08 to 1 mm in diameter. This occurs with wind speeds of 10 to 25 knots.
As a rule of thumb, winds at the surface need to be 15 knots or greater to mobilize dust. The table shows the wind speeds required to lift particles in different source environments.
Once a dust storm starts, it can maintain the same intensity even when wind speeds slow to below initiation levels. That’s because the bond between the dust particles and the surface is broken and saltation allows dust to lift.
Lofting of dust typically requires substantial turbulence in the boundary layer. This image shows dust being mobilized during a downslope windstorm on the lee slope of the Sierra Nevada mountains in California.
Laminar flow in the right half of the photograph carries the dust close to the valley floor. Further left, the flow slows down and quickly becomes extremely turbulent. During the transition, the dust is lofted approximately 10,000 feet (3000 m).
Typically, wind shear creates the turbulence and horizontal roll vortices that loft dust up and away from the surface. As a rule of thumb, if the wind at the surface is blowing 15 knots, the wind at 1,000 feet (305 meters) must be about 30 knots to keep the dust particles aloft.
Because vertical motions are required to loft dust particles, it stands to reason that dust storms are favored by an unstable boundary layer. In contrast, stable boundary layers suppress vertical motion and inhibit dust lofting.
With the lack of vegetation in dust-prone regions, the ground can experience extreme daytime heating, which creates an unstable boundary layer. As the amount of heating increases, the unstable layer deepens.
As we’ve seen, it’s not enough to have strong wind; the wind must be sufficiently turbulent to loft dust and must occur in a reasonably unstable environment. Wouldn’t it be nice to have a single parameter that expresses wind speed, turbulence, and stability? We do. It’s called the friction velocity.
In more technical terms, dust mobilization is proportional to the flux of momentum, or stress, into the ground. A friction velocity of 60 centimeters per second is typically required to raise dust.
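As a rough illustration of how a wind observation relates to this threshold, the sketch below estimates friction velocity from a 10-m wind using the neutral-stability logarithmic wind profile. This is a simplified sketch, not how an NWP model computes it: the roughness length z0 is an assumed value for sparsely vegetated ground, and real values (and therefore the wind speed needed to reach 60 cm/s) vary widely with surface type and stability.

```python
import math

def friction_velocity(wind_speed_ms, z=10.0, z0=1e-3, karman=0.4):
    """Estimate u* (m/s) from a wind speed measured at height z (m), assuming
    neutral stability and an aerodynamic roughness length z0 (m)."""
    return karman * wind_speed_ms / math.log(z / z0)

THRESHOLD = 0.60   # m/s; the ~60 cm/s dust-raising value cited above

for knots in (15, 20, 30, 40):
    ustar = friction_velocity(knots * 0.514)   # knots to m/s
    status = "above" if ustar >= THRESHOLD else "below"
    print(f"{knots:2d} kt -> u* = {ustar * 100:5.1f} cm/s ({status} threshold)")
```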
Friction velocity is computed by many NWP models. This NOGAPS analysis for northwest Africa on 7 January 2003 at 12Z shows surface winds, ground wetness, topography, and friction velocity values greater than 60 cm/s.
Note the high friction velocities plotted in red and magenta across the Sahara, particularly near the west coast.
These parallel the area of blowing dust in this SeaWiFS true color image. Since both are from January, the dust in both cases is probably being lifted by the remnants of frontal boundaries manifested as shear lines across equatorial Africa. (Note: The image is from a year prior to the friction velocity chart but is still relevant.)
The whiter plumes are clearly visible in the center of the satellite image, as is an area of higher friction velocities to their north. The plumes are oriented northeast-southwest and are enhanced by the funneling of winds between two areas of higher terrain to the north and the south of the area.
The remnants of the cold front appear as cloud cover over the Red Sea, with cold air cumulus over the northern part.
Notice how the plume of dust blowing out to sea lines up nicely with the region of high friction velocity on land.
Dry desert air has a wide diurnal temperature difference. Strong radiative cooling leads to rapid heat loss after sunset. This quickly cools the lowest atmosphere, resulting in a surface-based inversion that can have a strong impact on blowing dust.
While a 10-knot wind can raise dust during the day, it may have little impact at night. This effect accounts for much of the diurnal variation in summer shamal dust storms, which we will discuss later.
The formation of a surface-based inversion has little effect on dust that’s already suspended higher in the atmosphere. Furthermore, if winds are sufficiently strong, they will inhibit the formation of an inversion or even remove one that has already formed.
If you’ve heard that dust storms always go away at night, that’s not necessarily true; occasionally they persevere. Dust RGB products enable us to detect dust storms at night, something that was not possible with earlier surface and satellite observations.
If you’re not familiar with RGBs, the acronym stands for Red, Green, Blue processing. The products are made from several spectral channels or channel differences and highlight specific features, such as dust. For more information, see the COMET module Multispectral Satellite Applications: RGB Products Explained.
When you are evaluating the potential for dust lofting, be aware of when the boundary layer has a dry adiabatic lapse rate, for the strongest winds aloft can be brought down to the surface, creating gusty conditions.
Be sure to examine winds at 925 mb (approximately 2,500 feet or about 750 meters above the surface when at sea level) where stronger winds allow more dust to be suspended aloft and persist for longer periods due to turbulent mixing.
This section addresses the fate of suspended dust once it’s been lofted high into the atmosphere. Eventually that dust will settle, although it may travel half way around the globe before doing so. As a forecaster, you need to be concerned about the processes that lead to lower dust concentrations, improved visibility, and reduced hazards. (But you should continue to look for conditions that can lower visibility again.)
On the following pages, we will discuss three processes that remove dust:
- Dispersion
- Advection
- Entrainment in precipitation
Gravity also plays a role, although we will not discuss it.
Life Cycle of a Dust Storm
This animation depicts the life cycle of a typical summer dust storm in Iraq, called a shamal.
The initial dust plume extends in a narrow swath immediately downwind from a relatively small source region. As the wind continues to blow, the plume expands laterally and also continues to move downwind.
Sometime later, the wind starts to diminish, eventually falling below the threshold required to continue raising dust. Although no new dust is being raised, the existing dust remains in suspension. The plume detaches from the source region and continues to move downstream and spread. Eventually the dust concentration diminishes through lateral dispersion and settling.
In the shamal example, the dust dissipated through two processes: dispersion and advection. We’ll start by looking at dispersion.
The fanning of a dust plume as it moves downstream from its source region is caused by dispersion, which is a diluting process. Basically, the more air you mix with a dust plume, the more it dilutes, spreads out, and disperses. This is similar to what you see if you pour dye into a river and watch how the color fades as the water moves downstream. Dispersion processes always act to dilute; plumes never re-concentrate.
This figure shows a highly idealized view of a plume dispersing as it moves downstream from a point source. Note that the concentration is not uniform throughout the plume. The highest concentration remains in the center and falls off away from it.
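The idealized fanning pictured here is commonly represented with a Gaussian plume. The sketch below is illustrative only, with assumed, fixed dispersion lengths rather than values tied to any particular stability class; it simply shows how concentration peaks on the centerline and falls off to the sides.

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h=0.0):
    """Relative concentration downwind of a continuous point source of strength
    q (mass/s) in a mean wind u (m/s), at crosswind distance y and height z (m).
    sigma_y and sigma_z are the horizontal and vertical dispersion lengths (m),
    h is the effective release height, and a ground-reflection term is included."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Concentration is highest on the centerline (y = 0) and falls off to the sides.
for y_m in (0, 100, 200, 400):
    c = gaussian_plume(q=1.0, u=5.0, y=y_m, z=0.0, sigma_y=200.0, sigma_z=50.0)
    print(y_m, f"{c:.2e}")
```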
Dispersion and Turbulence
Dispersion is primarily governed by turbulence, which mixes ambient air with the plume. Any increase in turbulence increases the rate at which the plume disperses.
Three kinds of turbulence act to disperse a plume:
- Mechanical turbulence
- Turbulence caused by shear
- Turbulence caused by buoyancy
Mechanical turbulence is caused by air flowing over rough features, such as hills or buildings.
Turbulence from shear can result from differences in wind speed and/or direction.
Buoyancy turbulence can be caused by something as dramatic as an explosion or as simple as parcels of air rising during the diurnal heating of the surface. Particularly in the latter case, buoyancy is governed by the stability of the atmosphere.
Turbulence acts to disperse dust plumes and keep the dust particles in suspension. Without turbulence, dust generally settles at a rate of 1,000 feet (305 meters) per hour. However, this is highly dependent on environmental conditions. Any turbulence will slow the settlement rate.
Dispersion and Stability
We've seen how unstable conditions favor the lofting of dust and formation of dust storms. Stability also has a strong influence on how dust disperses.
This graphic shows dust plumes dispersing under both stable and unstable conditions.
When the local environment is unstable, dust plumes disperse both horizontally and vertically, although the effect is significantly more pronounced in the vertical.
When the atmosphere is stable, vertical dispersion of dust is suppressed, but horizontal transport is still possible.
Under neutral stability, dust plumes disperse roughly equally in the horizontal and vertical because neither direction is favored.
Our initial shamal schematic showed the dust plume detaching from the source area when the winds dropped below the threshold to loft dust. Visibility would be expected to improve substantially in the source area soon after this happened. The dust that was lofted simply moved away from its source. Where does the dust go? Recall that dust storms are typically several thousand feet high and frequently extend up to 15,000 feet (4600 meters), and that wind shear contributes to the turbulence needed to loft dust. Therefore, winds aloft may very well carry dust in a direction that’s different from the wind direction on the ground.
When predicting where a dust plume will travel, you should check the vertical wind profile. As this animation shows, dust that leaves the ground going one direction can rise to a level where it travels in an entirely different direction. Fortunately, dust forecast models can do the hard work for you, accounting for the complex evolution of dust plumes in a three-dimensional framework.
Settling of Dust
Particle size plays an important role in both lifting and settling thresholds. Longer suspension times for smaller particles result in long periods of dust haze in arid areas.
Particles between 10 and 50 micrometers fall at about 1,000 feet (305 meters) per hour. Using that rate, if dust is lifted to 5,000 feet (1500 meters) and the wind ceases, the dust will settle in about 5 hours.
Over how large an area? If winds are 10 knots and there’s little to no vertical motion, the dust will typically settle up to 50 nautical miles downstream from the source. Settling is by particle size, with the largest particles falling out first and the smallest ones falling out last. Therefore, the larger, heavier particles will settle near the source area, with the smaller ones settling farther away.
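A minimal sketch of this back-of-the-envelope estimate is shown below. The 1,000 ft/hr fall rate, the 5,000-ft plume top, and the 10-kt wind are the rule-of-thumb numbers from the text; turbulence or vertical motion will stretch both the time and the distance.

```python
def settling_estimate(plume_top_ft, wind_kt, fall_rate_ft_per_hr=1000.0):
    """Rule-of-thumb settling time (hours) and downstream settling distance
    (nautical miles) for suspended dust once the lofting wind stops."""
    hours = plume_top_ft / fall_rate_ft_per_hr
    distance_nm = wind_kt * hours        # knots are nautical miles per hour
    return hours, distance_nm

hours, nm = settling_estimate(plume_top_ft=5000, wind_kt=10)
print(f"~{hours:.0f} hours to settle, reaching ~{nm:.0f} nm downstream")
# -> ~5 hours and ~50 nm, matching the example in the text
```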
Most dust particles are hygroscopic, or water-attracting. In fact, they usually form the nucleus of precipitation. Because of this affinity to moisture, precipitation very effectively removes dust from the troposphere.
Dust Source Regions
Dust-Prone Regions from Land Cover Types
Dust storms can only form if there’s an appropriate source region. In this section, we’ll look at where these are located, starting with a global view and then focusing on the Middle East and Southwest Asia.
Drawing on a USGS global database of land cover types, eight land covers are thought to produce dust: low sparse grassland in Mongolia; bare desert equatorward of 60° latitude; sand desert; semi-desert shrubs equatorward of 60° latitude; semi-desert sage; polar and alpine desert; salt playas/sabkhas; and sparse dunes and ridges.
When these land cover types are combined with wetness values, we get a bulk measure of erodibility. The figure shows how the world's deserts dominate the resulting pattern.
Source Regions from TOMS Aerosol Index
Identifying dust-prone regions based on land cover characteristics can be refined by incorporating satellite data. The Total Ozone Mapping Spectrometer Aerosol Index (TOMS AI) provides a near-real-time measurement of aerosols in the atmosphere.
This plot bases the dust productivity of the earth surface on the observed frequency of high aerosol values and results in a much more refined view of global dust source regions. Clearly, the majority of the world’s dust storms arise in relatively few areas, in particular, the Sahara, Middle East, Southwest Asia, China, Mongolia, and Southwestern North America.
Using TOMS AI to identify dust source regions, Prospero et al. (2001) hypothesized that dust sources are associated with topographical lows and depressions.
This graphic shows the dust sources used in the Air Force Weather Dust Transport Application (DTA) model. The oranges and reds indicate strong dust source areas.
Source Regions from DEP
In 2001, NRL started identifying and locating dust emission areas in Southwest Asia using the satellite-derived NRL Dust Enhancement Product (DEP). DEP’s 1-km resolution allows for the identification of individual plume heads that often measure 10 km or less across.
The MODIS true color image and the NRL DEP image show southern Afghanistan, northwestern Pakistan, and eastern Iran on 20 August 2003. By comparing the images, we see the benefit of DEP for identifying small dust plumes. The one in the white rectangle is barely visible in the true color image while it is readily apparent in the dust enhancement product in shades of pink.
The close-up view of the white rectangle in the dust enhancement product shows that this localized dust plume originated from many point sources to the north and merged into a single small plume as it dispersed to the south.
Cataloguing the individual point sources in dust enhancement products has led to the development of the NRL high-resolution (1-km) Dust Source Database (DSD).
Here we see the 1-km dust sources plotted in red for the 10°X10° tile covering Iraq. Each red area identifies land that has eroded and produced a dust plume.
This plot shows the NRL 1-km dust sources averaged on an 18-km grid where the grid erodible fraction varies from 0 (non-erodible or non-dust producing) to 1.0 (completely erodible and dust producing). Note the many dust-prone areas in eastern portions of the Arabian Peninsula and the spotty source regions in Iran and Afghanistan.
Middle East/Southwest Asia
Soil Types in the Middle East
Why are some areas of the Middle East much more prone to dust storms than others? Several factors play a role.
Even in bare desert, sandy areas, such as those found on the Arabian Peninsula, generally do not generate dust storms. It is the areas with silt- and clay-rich soils, most common in Iran and Iraq, that are responsible for most dust storms. In this region, these fine-grained soils are found in areas with dry lake beds and river flood plain deposits. The low-lying regions of the eastern Arabian Peninsula, southern Syria, and western Iraq are particularly prone to dust storm generation because prevailing west/northwesterly winds are unimpeded by higher terrain. An area's potential for dust storm generation is also indicated by its climate, that is, its precipitation patterns, prevailing wind direction and speed, and normal location of low- and high-pressure centers.
Dust Source Climatology
You can frequently identify potential dust source regions with satellite imagery. If you’re forecasting for a new region, you can build a dust storm climatology from archived satellite imagery to establish the most prevalent source areas. This is similar to the TOMS Aerosol Index climatology discussed earlier except that it can be much more precise.
For example, this sequence of images reveals that the same light-colored areas in western Afghanistan repeatedly serve as the source for dust storms.
Once you know the color characteristics of source areas in a given region, you should look for other potential areas with a similar appearance.
Interannual Variations in Dust Source Regions
Periods of extended drought dry out lakes, wetlands, and otherwise productive agricultural land, often resulting in new and expanded dust sources. The opposite occurs with wet winters, when numerous storms, heavy rains, and/or above-average snowfall can flood lakes, rivers, and streams and shut off active dust sources.
For example, Southwest Asia experienced an extended drought from 1998 to 2005. Then in 2005, heavy rain and melting snow led to numerous floods in southern Afghanistan.
This MODIS true color image shows the Sistan Basin, one of the world’s driest basins, as it was on 21 February 2005 before it experienced heavy rains and snow melt.
The false color image from 7 March 2005 shows how much the basin changed. The dark blue indicates clear, deep water; the light blue indicates mud-laden water.
The oval in this MODIS true color image from 12 October 2005 shows Lake Saberi. Notice that it is still filled with muddy, brown water after the long, hot summer.
When the Hamoun Lakes and wetlands are filled with water, the production of dust plumes and storms decreases.
These NRL DEP images of Pakistan and Afghanistan on 2 May 2003 and 12 October 2005 demonstrate the difference between a drought-ravaged basin and one that has experienced a wet period.
Areas of Highest Occurrence
This map shows the areas with the highest occurrence of dust storms around the northern Persian Gulf. Maps like this are compiled over several seasons of observations and are invaluable for helping forecasters anticipate dusty conditions. Note that these areas correspond relatively well with the dust source regions that we looked at previously.
Synoptically Forced Dust Storms
In most areas, we can classify dust storms by the broad meteorological conditions that cause them. In this section, we will examine the most common events that occur in the Middle East. These are dust storms caused by prefrontal and postfrontal winds that primarily occur in winter, and summer dust storms caused by persistent northerlies.
Note that whenever dust is a forecast consideration in your area, you should become familiar with the local atmospheric conditions that lead to strong winds under dry conditions. Each region has its own weather patterns that lead to dust storms.
Prefrontal Dust Storms
Prefrontal dust storms occur across Jordan, Israel, the northern Arabian Peninsula, Iraq, and western Iran as low-pressure areas move across the region. Antecedent factors include a band of winds generated by, and ahead of, the low-pressure area that presses against a stationary high-pressure center in Saudi Arabia or the western slopes of Iran’s Zagros Mountains.
This chart depicts prefrontal winds as a low-pressure area migrates into Iraq. The southeasterly or sharqi winds that blow northward up the Tigris/Euphrates River basin are intensified as low-level flow is funneled between the Zagros Mountains to the east and the pressure gradient to the west. Toward the west, southwesterly or suhaili winds pick up dust from western Arabia and move it northeast in advance of the cold front.
This image shows sharqi prefrontal dust plumes emanating from dry lake beds and fluvial deposits in southeastern Iraq. (Fluvial deposits are associated with rivers and streams.)
The polar jet stream behind the cold front and the subtropical jet stream in front of it often interact dynamically to strengthen the front east of the upper-level trough. The strengthened cold front induces stronger prefrontal winds out ahead of the upper-level trough. In addition, the overlapping of these jet cores and the coupling of secondary circulations in the right rear of the polar jet and the left front of the subtropical jet enhance mid-level upward vertical velocities and increase the lifting force for blowing dust.
Under these conditions, westerly winds mobilize dust and sand across Jordan, Syria, and northwestern Saudi Arabia, transporting it east and northeastward across the Arabian Peninsula, Iraq, and the Persian Gulf countries.
Let’s look at an example. A prefrontal dust storm occurred on 25 March 2003. Winds in front of a powerful Mediterranean cyclone whipped up a thick dust storm that significantly impacted movement on the ground.
This graphic shows a 24-hour prognostic weather map valid on 25 March at 12Z as the dust storm moved across Iraq. We see the convergence of the polar front jet and the subtropical jet, indicated by the red arrows. The shaded areas indicate where dense dust storms with visibilities of less than 1 km were predicted along the associated cold front.
This SeaWiFS true color image taken during the morning hours shows the extent of the unfolding dust event. Areas of prefrontal dust cover northeastern and eastern Saudi Arabia and extend into Iraq where cloud cover makes it difficult to observe its full extent from satellite. In addition, postfrontal dust is forming behind the advancing cold front from Egypt and Sudan northeastward into northern Saudi Arabia and northwestern Iraq.
The upper-level trough moved east into the northern Arabian Peninsula, resulting in fairly cloud-free skies along the front from the Red Sea. From there to the west, the building high pressure to the north resulted in convergence (a shear line) that provided lift for blowing dust and sand across Sudan and equatorial Africa.
Over the Red Sea itself, satellite imagery can be used to help identify the front as the associated low-level convergence and moisture typically lead to the formation of stratiform clouds. This SeaWiFS image taken on the morning of 26 March shows cloud cover along the cold front in the southern Red Sea extending northeast along the remainder of the frontal zone to the main area of low pressure.
Arabian Peninsula Prefrontal Dust Event
Here is another example of a prefrontal dust event. (There's also some post-frontal activity.) The weather chart shows that behind an advancing cold front, southern Iraq and northern Saudi Arabia are experiencing blowing dust or shamal conditions.
At the same time, prefrontal dust plumes south of the low are being lifted from the United Arab Emirates northward across the Persian Gulf due to increasing low-level southerly flow.
Now we'll zoom in on the area in the red box.
This MODIS true color image highlights the type of prefrontal dust conditions that affected large portions of the Middle East, including Iraq, northern Saudi Arabia, and the United Arab Emirates during this event.
North Africa Sirocco Winds and Prefrontal Dust Event
This MODIS true color image shows another case of prefrontal dust over North Africa. It is associated with a frontal system over the Mediterranean, which extends southward into Libya. The prefrontal winds responsible for the blowing dust are known as the sirocco in this region.
Plumes of dust can be seen moving from northwestern Egypt and northeastern Libya across the Mediterranean Sea.
The leading edge of the advancing cold front is indicated by the middle- and high-level cloud band oriented north to south.
The weather chart shows the synoptic features and low-level wind flow associated with the dust outbreak as the low-pressure system over the southern Mediterranean moves eastward.
Sirocco wind events can have speeds of up to 100 km/h (55 knots) and are most common during the fall and spring.
Postfrontal Dust Storms
As you’ve seen, widespread dust can also occur following pre-frontal events. Especially in winter months, cold frontal passage leads to strong northwesterly winds on the backside of the front. The resulting dust storm is referred to as a shamal from the Arabic word for north. Shamals produce the most widespread hazardous weather known to the region.
The RGB animation shows a cold front-generated sandstorm stretching to the west of the Persian Gulf. The front has passed and lies to the south of the dust front. Strong northwesterly postfrontal flow has picked up dust along the front and appears to be moving to the south and east.
Winter shamals generally last for either 24 to 36 hours or three to five days.
The shorter-period shamals typically begin with passage of the front. When the associated upper-level trough or rapidly moving short waves move eastward, winds diminish after 24 to 36 hours. Such cases are relatively common, occurring two to three times a month. Sustained winds typically reach 30 knots, with stronger gusts to 40 knots.
The longer-term (three- to five-day) shamal occurs one to three times a winter and produces the strongest winds and highest seas in the Persian Gulf. Over the exposed gulf waters, sustained wind speeds have reached 50 knots and produced 10- to 13-foot seas. This type of shamal arises either from:
- The temporary stagnation of a 500-mb shortwave over or just east of the Strait of Hormuz, or
- The establishment of a mean longwave trough over the same area
Persistent dust and sand storms occur throughout the life spans of both types of shamals.
Dry conditions enhance dust during the first few cold frontal passages of the season. In fact, widespread dust often occurs with the first passage, restricting visibility to less than three nautical miles. Subsequent fronts bring precipitation that binds soil particles together. In these circumstances, winds above 25 knots are often needed to raise dust.
Here’s a typical synoptic-scale surface chart for a winter frontal event similar to the one seen in the previous satellite image. Strong pressure gradients develop behind a moderate to strong cold front due to upper-level subsidence and rapidly building surface high pressure over northwestern Saudi Arabia and Iraq. The strong northwesterly low-level winds are quickly reinforced by west-to-northwesterly upper-level winds behind the mid-level trough. Shamal winds and postfrontal blowing dust develop behind the cold front over southern Iraq and northeastern Saudi Arabia. Farther to the west, another area of post-frontal blowing dust forms to the north of the surface trough and shearline over southern Egypt.
In this example, a low is centered over Iraq and Kuwait, with a strong pressure gradient to the southwest. As a result, there is strong northwesterly flow to the west of the low, with winds reaching 20 to 25 knots near the Persian Gulf. When this flow is combined with unstable boundary layer stratification in the postfrontal environment, conditions are ripe for a dust storm.
The synoptic chart highlights the major frontal features as the dust event unfolds. Notice that in addition to the areas of postfrontal dust, pre-frontal dust also forms over northeastern Saudi Arabia, southern Iraq, and Kuwait.
Let’s focus on the cold front as it moves southward over the Red Sea. In this MODIS true color image, the dust cloud is readily visible over the water as dust is advected southward by strong northerly postfrontal winds.
A mid-level trough centered over Saudi Arabia and extending northward to the eastern Mediterranean Sea is associated with the surface trough over Iraq, Kuwait, and Saudi Arabia. Strong northwesterly flow exists on the backside of the trough with 80-kt winds being reported over Egypt (in the cyan area).
Deep vertical mixing is helpful in generating dust storms. Strong downward vertical motion, which is likely to be associated with a middle- to upper-level front within the upper-level trough shown, does two things:
- It helps to prevent cloud formation or evaporate preexisting cloudiness. This increases solar warming in the lower troposphere and enables strong winds to mix into the low levels from above. This requires that middle and lower tropospheric lapse rates be unstable, which is more likely to occur when strong solar heating is present. Although subsidence itself can stabilize lapse rates, even the wintertime Middle Eastern sun can often offset this stabilizing effect.
- Strong winds dynamically accompany the downward intrusion of upper tropospheric air into the mid-levels and near the surface. This momentum and dry air then mix to the surface, leading to clearing conditions and little precipitation on the back side of a trough.
Examples from Other Regions
Excitation of dust storms by frontal passage is not limited to the Middle East. This MODIS true color image shows a massive dust storm that struck Sydney, Australia on 23 October 2002.
In this image, the dust extends 932 miles (1500 km) north-northwest from near the southeastern corner of Australia. The source region for the dust was an enormous dry lake bed in south-central Australia. Eastern Australia had been in the grip of a drought for six months, which made the soil much easier to loft. The red dots mark active bushfires. Smoke plumes clearly show the low-level wind direction ahead of the advancing dust front.
This weather map shows the location of the cold front and surface trough at about the same time as the MODIS image. Notice the close correlation between the frontal location and the dust cloud.
Postfrontal dust storms are also common across the American southwest. This MODIS true color image shows one that originated in northern Mexico and western Texas.
In this case, surface winds were very strong. In this mid-afternoon image, the jet maximum had rounded the base of an upper-level trough and was transporting momentum to the surface from strong winds at middle and upper levels.
Finally, postfrontal dust storms also affect large portions of East Asia as dust is lifted by strong winds from the arid regions of northwestern China and Mongolia. The dust is transported across China and the Yellow Sea, often impacting Korea and Japan. During the winter and spring, the westerly jet stream sometimes transports the dust over the North Pacific and as far east as North America.
This NASA SeaWiFS true color image shows an expansive dust storm circulating around low pressure over northeastern China.
The summer shamal is a wind that blows with persistence over Iraq and the Persian Gulf from late May to early July. Compared to winter events, summer dust storms have greater vertical motion due to high temperatures and resultant convective currents.
The summer shamal results from a characteristic synoptic pattern in which the most prominent features are:
- A semi-permanent high-pressure cell extending from the eastern Mediterranean to northern Saudi Arabia
- A low-pressure cell over Afghanistan
- Thermal low pressure associated with the monsoon trough extending into southern Saudi Arabia
As the surface pressure analysis shows, the cyclonic circulation around the region of low pressure combines with the anti-cyclonic circulation around the high-pressure cell to increase the winds over the northern Persian Gulf region. These winds are normally confined from the surface up to 5,000 feet (1500 meters). The shamal is particularly strong at ground level during daytime but weakens at the surface overnight. Shamal events can be quite long-lived, lasting several weeks. These are known as the "40-day shamal."
This Meteosat Second Generation (MSG) infrared image overlaid with surface pressure for 7 August 2004 shows the thermal low over southern Afghanistan and northern Pakistan. Higher surface pressure is located over the eastern Mediterranean, with lower surface pressure in the northern Persian Gulf. This produces strong northerly winds in Afghanistan and Iraq.
This NRL MODIS DEP overlaid with surface winds confirms the presence of strong surface northerly winds, with a mesoscale dust storm taking place over western and southern Afghanistan. Dust plumes are evident in Central Iraq.
Upper-level high pressure is often found over the Saudi Peninsula during the summer shamal. The location of the center of the high can vary, as we see in the MSG infrared image and the 500-mb height composite for 7 August 2004.
For the larger-scale events that we’ve been describing, it’s critical to forecast the following:
- Where the wind will be sufficiently strong to mobilize dust
- Where there’s a sufficiently unstable boundary layer
- Where there’s an appropriate source region to excite a dust storm
In general, NWP models do a very good job of forecasting winds and atmospheric stability due to synoptic-scale weather events. As a forecaster, you need to integrate this information with your knowledge of local conditions to accurately forecast blowing dust.
Dust Storms Caused by Mesoscale Systems
This section examines dust storms generated and influenced by mesoscale forcing, focusing on examples from the Middle East and Southwest Asia.
The range of mesoscale phenomena known to excite dust storms includes downslope winds, gap flow, and convection. We'll start with downslope winds.
This MODIS true color image shows two separate dust storms that occurred along the northern Afghan border on 25 March 2009. One was in the Termez Valley, the other in the northern Herat Province. Notice that the dust plumes of the two storms are orientated in different directions. The wind flow in the Termez Valley is easterly, while there is southwesterly, downslope flow in the northern Herat Province. The difference is due to terrain and its impact on the direction of the wind flow.
A surface low located to the north-northwest near the Aral Sea is causing a southeast-to-northwest surface pressure gradient.
Himalayan Gap Flow and Afghan Dust Storms
This MODIS dust product shows dust storms in Afghanistan and to the south in Iran and Pakistan.
Looking at Afghanistan, we see plumes of dust coming off numerous dry lake beds that lie immediately south of the high terrain of the Hindu Kush. The strong flow is a result of gap flow through mountain passes and down to the lowlands where the lake beds lie. The image also shows plumes of dust coming off Iran and Pakistan and blowing over the Arabian Sea.
These graphics show another view of the Afghan dust event. The image on the left is a MODIS true color image, the one on the right a plot of surface winds from a COAMPS simulation run with 9-km grid spacing. By comparing the dust plumes in the satellite image to the streamlines predicted by the model, you can see that COAMPS does a good job with the circulation. The north-northwest gap flows over Afghanistan are well represented, as are the northeasterly flows over the coast. This suggests that a good mesoscale model can give you a handle on the flow that sets up dust events.
The Afghan dust storms are associated with upper-level lows and highs propagating across central Asia. In particular, these events are associated with high pressure building across Uzbekistan, which gives rise to a very strong pressure gradient across the mountains. The pressure gradient results in ageostrophic gap flow that raises dust storms. This is very different than the synoptic-scale geostrophic flow that gives rise to shamal-type events further to the west, over the Persian Gulf region.
Red Sea Dust Storm
This SeaWiFS true color image shows a dust storm event that occurred around the Red Sea in July 1999.
The large thermal contrast between the interior of Sudan and the Red Sea resulted in a strengthened pressure gradient that helped generate the dust storm. The lower terrain of the Tokar Gap provided a path for the dust to move over the Red Sea.
(The Tokar Gap is a low-elevation break in the mountains that flank the west side of the Red Sea.)
The dense plume of dust entering the Red Sea disperses and casts a pall over the area. The mountains to the east appear to block and turn the winds southeastward.
The Red Sea Convergence Zone also helps to keep dust trapped in the center of the Sea. The zone is formed by air flowing in from the north and south, which creates an area of convergence that traps the transported dust.
Accurately forecasting gap flows generally requires a mesoscale model with several grid cells inside the gap. Since the Tokar Gap is approximately 68 miles (110 kilometers) wide, high-resolution mesoscale models should sufficiently capture the flow.
[Note: For more information on gap winds, see COMET's Gap Winds module.]
Here we see a haboob, which is a dust storm caused by convective downbursts. Haboobs are the true walls of dust and sand that most people think of as strong dust storms. Most of the dust particles range from 10 to 50 micrometers, but larger particles (up to several millimeters in size) can be blown about. The larger particles settle rapidly after the wind subsides, whereas the finer ones settle at about 1,000 feet (305 meters) per hour when the haboob finally dissipates. Other areas clear rapidly as the dust is advected out of the area.
Properties of a Haboob
Winds associated with the gust front of a dry downburst from a convective storm average 35 to 50 knots and can easily excite a dust storm when they encounter an appropriate source area. Haboobs tend to be rather small, on the order of 60 to 90 miles (100 to 150 km). Their average height extends from 5,000 to 8,000 feet (about 1500 to 2500 meters) at the peak of the event. However, heights up 15,000 feet (4500 meters) have been recorded when exacerbated by convergent outflow boundaries. The average haboob tends to be short-lived, about three hours. Visibility usually begins to improve soon after the gust front passes.
Although haboobs can be seen approaching a location from afar, they move in very quickly, typically at about half the velocity of the winds within the storm. So a haboob packing 50-knot winds will move at about 25 knots.
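As a quick nowcasting illustration of that rule of thumb (not an official procedure), the sketch below estimates arrival time from an observed gust-front wind and the distance to the dust wall; both inputs in the example are hypothetical.

```python
def haboob_arrival_minutes(distance_nm, storm_wind_kt):
    """Minutes until a haboob reaches a site, assuming it advances at roughly
    half the speed of the winds inside the storm (the rule of thumb above)."""
    speed_kt = 0.5 * storm_wind_kt
    return 60.0 * distance_nm / speed_kt

# A haboob with 50-kt winds observed 15 nm away would arrive in about 36 minutes.
print(round(haboob_arrival_minutes(distance_nm=15, storm_wind_kt=50)))
```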
Haboobs in Different Regions
This visible satellite image shows a haboob near the Persian Gulf that's associated with a thunderstorm system to the north. The convection is related to summer conditions, with moist inflow from the Persian Gulf.
These MODIS satellite images show additional gust fronts that probably had surface haboobs associated with them. This first image is over Iraq on 31 Mar 2010 at 0730Z.
This second image is over Iran at the same time.
This infrared loop shows a haboob that occurred in the early morning hours of 01 Aug 2001 in the Western Sahara Desert. Over the next 6 hours, the haboob moved west, eventually reaching the Canary Islands. This shows how long a distance strong haboobs can propagate.
The haboob shown above was associated with a large convective complex over central Australia. It illustrates the clearing behind the gust front due to the subsidence and cooler air accompanying the downburst.
Haboobs are much more difficult to forecast than synoptically forced dust storms and rely largely on nowcasting (determining if the environment is right for haboobs). The following procedures will help you forecast haboobs from both ongoing and collapsing thunderstorms.
Forecasting haboobs from ongoing thunderstorms
- Look for signs of instability aloft. Use the Best Lifted Index (the Most Unstable Lifted Index).
- Look for high environmental relative humidity between 700 and 500 mb and/or high values of simulated radar reflectivity from WRF/COAMPS or actual reflectivity from a nearby EWR radar if it’s available. Also look for steep lapse rates between the surface and approximately 18,000 feet (5 km).
- Find the strongest wind at any level aloft where the wet-bulb potential temperature is less than the surface potential temperature plus 4 °C (about 7 °F). It's possible that this wind may be brought down to the surface. (A sketch of this check appears after the list.)
- Determine if your forecast area is located in or near a dust source region.
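The wet-bulb potential temperature check in the third item can be automated when a forecast or observed sounding is at hand. The helper below is a hypothetical sketch, assuming you already have per-level wind speeds and wet-bulb potential temperatures; it simply returns the strongest wind among the levels that satisfy the criterion.

```python
def strongest_transportable_wind(wind_kt, theta_w_c, surface_theta_c, margin_c=4.0):
    """Scan sounding levels and return the strongest wind (kt) at any level whose
    wet-bulb potential temperature is below the surface potential temperature
    plus margin_c; that wind may be mixed down to the surface. Returns None if
    no level qualifies."""
    candidates = [w for w, tw in zip(wind_kt, theta_w_c)
                  if tw < surface_theta_c + margin_c]
    return max(candidates) if candidates else None

# Hypothetical sounding: winds (kt) and theta-w (deg C) at successive levels.
winds = [20, 35, 45, 55, 60]
theta_w = [24.0, 22.0, 21.0, 26.0, 27.0]
print(strongest_transportable_wind(winds, theta_w, surface_theta_c=20.0))  # -> 45
```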
Forecasting haboobs from collapsing thunderstorms
- At what time of day is the thunderstorm occurring? Thunderstorm collapse is most likely after sunset.
- Determine the cloud base height of the thunderstorm. The higher it is (greater than 10,000 feet or 3 km above ground level), the warmer the resultant outflow at the surface due to adiabatic compression, and the weaker the potential haboob. Downdraft acceleration will mitigate the warming issue to a limited extent.
- Check for rapidly warming cloud tops in looped geostationary infrared imagery, which are indicative of thunderstorm collapse.
- Determine if the thunderstorm is occurring over a dust source region.
Inversion Downburst Storms
Inversion downburst storms are windstorms that occur on sloping coastal plains with a strong sea breeze. As the sea breeze intensifies, convergence along the sea breeze front can generate sufficient lift to break a capping inversion. This potential instability results in the downward mixing of cool air aloft, which flows downslope and out over the water. The descending air produces roll vortices and potentially severe local dust storms along the coast. Then the inversion is reestablished and the event dies out.
Inversion downburst storms form in coastal terrain where slopes are at least 20 feet (6 m) per mile, such as those found along the Red Sea and Persian Gulf. They occur when the sea breeze exceeds 15 knots and there's an inversion aloft, but not a particularly strong one. The downburst winds last 15 to 45 minutes and reach speeds of 90% of the gradient flow immediately above the inversion, typically 20 to 25 knots. These storms are limited in size, although they can still reduce visibility to less than one mile depending on local surface soil conditions.
Inversion downburst storms typically lead to a very narrow streamer of dust out over the Persian Gulf. Although they occur on both sides of the Gulf, they are more commonly associated with the eastern side, along the Iranian coast. That's probably because the climatologic synoptic flow favors a stronger sea breeze there. Predicting their location is very difficult, but you should look for places where coastal curvature favors stronger sea breezes or sea breeze convergence. Variations in the strength of the inversion also impact where the event is located. And, like all dust events, they require an appropriate source region.
[For more information on sea breezes, see COMET's Sea Breeze module.]
Dust devils are a common wind phenomenon that occurs throughout much of the world. These dust-filled vortices are created by strong surface heating and are generally smaller and less intense than tornadoes. Their diameters typically range from 10 to 300 feet (3 to 90 m), with an average height of approximately 500 to 1,000 feet (150 to 300 m). Dust devils typically last only a few minutes before dissipating; however, when conditions are optimal, they can persist for an hour or more. Wind speeds in larger dust devils can reach 60 mph (about 50 knots) or greater.
Dust devils form in areas of strong surface heating. This typically occurs under clear skies and light winds when the sun can warm the air near the ground to temperatures well above those just above the surface layer.
Once the ground heats up enough, a localized pocket of air will quickly rise through the cooler air above it. Hot air rushes in to replace the rising air at the bottom of the developing vortex, intensifying the spinning effect. Once formed, the dust devil is a funnel-like chimney through which hot air moves upward in a spiraling motion. If a steady supply of warm, unstable air is available, the dust devil will continue to move across the ground. However, once that supply is depleted or the balance is broken in some other way, the dust devil will break down and dissipate.
Dust devils can vary greatly in size, both in diameter and vertical extent. Notice how aggressive the interaction with the surface can be.
Dust Storm Seasonality and Frequency
Climatologies tell us what happened in the past, which helps us anticipate future events and improve our forecasting. Climatology provides several types of data that help with forecasting the location, seasonality, frequency, and severity of dust storms.
We’ve seen how data from the TOMS Aerosol Index helps us map dust source regions. That same data can help us determine seasonal variations in dust storms.
This animation shows the seasonal variability of dust storms in the dust belt that stretches from western Africa up through the Taklamakan Desert in central Asia. Note the strong seasonal dependence of dust storm frequency. For example, dust storms in the Taklamakan show a pronounced peak in May, while the maximum values for West African dust storms shift northward from winter to summer.
Dust Storm Frequency and Severity
Climatologies compiled by the Air Force Weather Agency Metsat Applications Branch show the monthly frequency of dust storms. Note how the number of storms in the Gobi Desert spikes in March and April and tapers off from May through July.
When we categorize the dust storms by visibility, the picture becomes clearer. Not only is the highest frequency in the early spring, but the majority of severe dust storms occur in March and April, more than in the rest of the year combined.
This graph of dust storm climatology for Iraq reveals some important information. Dust storms tend to be most frequent in the summer, although severe storms can occur from spring through autumn.
Dust Storm Frequency and Precipitation
If you are forecasting in a region and don't have access to information on the frequency of dust storms, you may be able to infer a climatology by examining other climatologic data such as the frequency of dust events vs. annual precipitation rates (PR).
Obviously, drier, hotter conditions favor more dust storms. Here we see a minimum for precipitation events and a maximum for temperature in central Iraq through the summer months, the dustiest time of the year.
Satellite Detection of Dust
Defining the Problem
Using satellites to detect dust has historically been difficult. A dust cloud that's visible at one time and place can suddenly seem to disappear, only to reappear somewhere or sometime else. Furthermore, dust that's prominent during the day can suddenly seem to disappear at night.
Much of the problem stems from the use of single channel visible and infrared images. While they are beneficial in some situations, a number of issues limit their overall usefulness.
In this section, we will show how satellite detection of dust has dramatically improved through the use of multispectral products, which are increasingly available to forecasters.
Specifically, you will:
- See how visible and infrared images are used to detect dust
- See how animating single channel imagery can improve dust interpretation
- Learn how a scientifically based Aerosol Optical Depth product can be helpful to forecasters
- Learn about the improvements made possible with RGB products
Detecting Dust During Midday
In general, it's easier to detect dust during the day than at night, although performance varies with the time of day. Surface type also has a large impact.
Midday visible images depict dust better over water than desert land surfaces. That's because the dust disappears into the sandy, dusty land background while it contrasts distinctly against water (dark) surfaces.
The reverse occurs with infrared images. Dust clouds that are cooler than the underlying hot surface show up distinctly over land. But when they drift over water, the dust usually disappears against the relatively cool waters.
Let's look at these effects in MODIS imagery. The visible image above shows the dust front moving from north to south over the Red Sea. However, the dust is not nearly as easy to detect over the bright land. Still, we can faintly see some plumes emanating from source regions on the east side of the Red Sea.
The infrared imagery does a poor job of detecting dust fronts over water and a much better job over land where the thermal contrast between the dust and surface is enhanced. Note the prominent dust plumes over land emanating from the source regions.
Next, we'll see what it's like to observe dust during the late afternoon and early morning hours using visible images.
Detecting Dust at Sunset and Sunrise
The rules for interpreting dust with visible imagery are different both before the sun sets and after it rises. If the satellite is looking in the general direction of the sun and the dust, forward scattering by the dust particles increases the dust signal reaching the satellite.
Backscattering occurs when the satellite is looking away from the sun. Since less solar energy is scattered back toward the satellite, our ability to see the dust is reduced.
Let's look at an example. This MSG natural color RGB product is derived from visible and other solar wavelengths. It shows the Arabian Peninsula at dawn, when the forward scattering of a large dust cloud reveals an advancing dust plume. It would be difficult to detect the cloud with this RGB product in the middle of the day.
DMSP polar-orbiting satellites pass over a location in the early evening and early morning local time, providing a favorable sun-satellite viewing geometry for observing clouds and dust.
In this morning example, more of the scattered sunlight reaches the satellite because the satellite is looking in the direction of the sun's rays. We can see the dust and surface of the water (sun glint), which are more evident against the darker land surface. Notice the dust front approaching Kuwait. The low sun angle even allows us to see wave structure within it.
The second image was taken at the same time from the geostationary Meteosat 7 satellite, which is located further east. Notice how the dust is much less evident and how dark the water surface is. That's because the satellite is viewing from an angle similar to the sun's, meaning that it is seeing less scattered solar energy.
It's evening now and the same dust front has pushed southward to about 25 degrees north. How would you expect the dust to appear with the DMSP satellite that's now to the east? (Choose the best answer.)
The correct answer is a).
The DMSP satellite is viewing the dust from the east, looking in the direction of the sun. Therefore, it sees more of the energy scattered in the forward direction.
From this discussion, it should be clear that some of the best dust viewing on visible images occurs in the early morning and evening with the DMSP satellite.
Aerosol Optical Depth
Now we'll examine a product that's based on the solar channels from MODIS midday data. The visible image is hard to interpret…
…but the aerosol optical depth (AOD) product shows color-coded optical depth (a strong indicator of dust) over the region.
AOD is a unitless measure of the amount of light that airborne particles, such as dust, smoke, haze, and pollution, prevent from passing through a column of atmosphere. AOD does not translate directly into surface visibility estimates because the location of the dust in the vertical is not known: it could be mostly aloft or near the surface. However, AOD serves as a first-order indicator of how dusty the atmosphere is. It is increasingly being assimilated into numerical dust forecast models and forecasters are using it as a nowcast tool to help quantify dust information near and over deserts.
In this example, we see a gathering dust storm over Syria in shades of orange, indicating an optical depth of up to about 1.5, which would likely impact visibility.
About 24 hours later, the dust storm has spread into the Persian Gulf. There are large values of AOD over Saudi Arabia, Iraq, and Kuwait, and lower values elsewhere.
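As a rough, back-of-the-envelope illustration (not part of this module), optical depth can be related to the fraction of direct sunlight that passes straight through the aerosol layer using the Beer-Lambert relation, transmittance ≈ exp(−AOD) for a vertical path:

```python
from math import exp

# Direct-beam transmittance through an aerosol layer for a vertical path
for aod in [0.1, 0.5, 1.0, 1.5]:
    print(f"AOD {aod}: about {exp(-aod):.0%} of the direct beam transmitted")

# An AOD of 1.5, as in the example above, lets only about 22% of the
# direct solar beam through, consistent with noticeably reduced visibility.
```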
Animating Satellite Images
We've seen the problems that can arise when trying to detect dust in most visible images over land. Animating daytime images can be a useful solution.
In this daytime animation, the motion of the dust cloud helps us identify its location. The cloud is moving southward from Saudi Arabia, Iraq, and Kuwait into the Persian Gulf. Notice how distinct the dust cloud is against the dark ocean background of the Persian Gulf. Just as a second surge of dust moves in from the northwest, darkness descends.
The corresponding infrared animation starts at about 0Z (0300 LST) and continues until 0Z the next night (0300 LST). Before starting the animation, notice that we can see clouds from a cold front but no dust in the Iraq and Kuwait region. Dust is actually present but it's invisible since it blends into the cool surface at night.
At dawn, solar heating starts to create thermal contrast, enabling us to see the dust flooding into the Persian Gulf region. By midday, the dust contrasts well against the heated Saudi Arabia land mass but poorly against the nearby Persian Gulf. By evening, however, the land cools down, the contrast between the elevated dust and surface decreases, and the dust disappears. Although we cannot see it, copious amounts of dust are still there.
This animation illustrates the inherent limitation in using infrared satellite loops to view dust. Detection diminishes as the temperature contrast between the dust and the background decreases.
This MSG dust RGB animation occurs over the same period but depicts dust at night when visible data are not available and infrared data are not very useful for this purpose.
By partially relying on multispectral channel differencing rather than thermal contrasts, the RGB product lets us detect dust at night, an unprecedented capability for satellite dust products.
The first frame of the animation, at about 0Z (0300 LST), shows a nighttime dust cloud in violet moving through southern Iraq. As you may recall, this feature was not apparent in the infrared animation.
We also see how several dust clouds progress through the 24-hour period, including a new outbreak the next evening.
Comparing Dust Products
Now we'll compare two dust RGBs, one from EUMETSAT, the other from the U.S. Naval Research Lab (NRL). The EUMETSAT dust RGB is based on three infrared channels and is available 24 hours a day. Notice the dust squall over Kuwait and the surrounding countries, which appears as pink or violet. Wind barb and weather symbols have been overlaid, including the symbol for suspended dust (S) in southern Iran and Saudi Arabia.
NRL's MODIS dust RGB is only available during daytime hours and portrays dust in orange or pink. Since it uses both visible and infrared data as inputs, it often reveals more dust over water than the EUMETSAT dust RGB.
In the extreme southern portion of this MSG dust RGB animation over Saudi Arabia, what is causing the dust outbreak in violet? View the dust RGB color scheme. (Choose the best answer.)
The correct answer is d).
Thunderstorms, which appear in deep red, create outflows of cool air and gusty winds, which pick up dust. The dust fronts then move away from the thunderstorms that created them.
What happens to the dust fronts? (Choose the best answer.)
The correct answer is b).
The two dust fronts, one from the north and one from the south, collide and create a convergence line, along which fresh convection develops.
Forecasting Dust Storms
The Forecast Process
This section presents a general process for forecasting dust storms that incorporates the wide array of tools currently available to help forecasters predict dust storms. These tools include satellite imagery and RGB products, surface and upper-air observations, NWP models, and a new generation of dust/aerosol models.
The dust forecast process is divided into three parts defined by the forecast lead time:
- Long range, 72 to 180 hours
- Medium range, 24 to 72 hours
- Short range, 0 to 24 hours
We'll describe the process and then apply it to a case. Note that the forecast process refers to U.S. Department of Defense (DoD) models and tools but is general enough to easily adapt to other forecast requirements and data sources.
Before starting to develop a dust forecast, you should be familiar with your area of responsibility and local rules of thumb. In particular, you should know:
- The types and locations of local dust source regions; for example, if there are lake beds, salt flats, or newly developed drought regions
- The types of soil present
- The impact of local terrain on wind speeds
- The wind direction with respect to local dust source regions
- How the winds align from the upper levels down to the surface (vertical wind shear), especially during winter
The use of model forecasts depends on the time range of your forecast. Short-range dust forecasts tend to rely on real-time analyses, while medium- and long-range forecasts rely far more on model output from:
- Mesoscale models such as the DoD's COAMPS, DTA-MM5, and DTA-WRF
- Global-scale models such as the DoD's DTA-GFS, NAAPS, and NOGAPS
Here are some tips to keep in mind when viewing model guidance. When possible:
- Consult different dust products from the same dust model or a different model since each product provides slightly different information
- Animate forecast products to identify mesoscale dust features and their movement, extent, and location
On the following pages, we'll examine the three forecast periods and then apply the process to a case in Southwest Asia. As you go through it, a Notes window will be available for tracking information about each time period. You'll need to refer to the information as you proceed through the case.
Long-Range Forecast Process
The long-range (72 to 180 hr or 3 to 7.5 day) dust storm forecast process has two steps.
Step 1: Look for large-scale, synoptically driven dust events in the 3 to 7.5-day range in global models, such as DTA-GFS and NAAPS.
Step 2: Look for model-forecast midlatitude troughs that drive pre- and post-frontal dust storms in winter and that can amplify the large-scale wind patterns associated with summer events, such as the northerly winds that create shamals. These large-scale waves are resolved by global NWP models such as GFS and NOGAPS, while the associated dust outbreaks are modeled by the global dust models DTA-GFS and NAAPS.
Medium-Range Forecast Process
For the 24- to 72-hr forecast, use mesoscale dust model output from models such as COAMPS, DTA-MM5, and/or DTA-WRF, and larger-scale dust forecasts from the global DTA-GFS and/or NAAPS models. Guidance from NOGAPS shows the evolution of larger-scale atmospheric features and is helpful for identifying conditions favorable for a blowing dust event.
Here are the steps in the medium-range dust storm forecast process.
Step 1: Examine the following charts from the DTA-WRF, COAMPS, NOGAPS, and/or DTA-GFS models.
- 300-mb height and wind forecast charts to track troughs and jet streaks; briefly examine upper-tropospheric winds to identify the presence of any jet streaks, especially for cool-season dust storms; jet streaks within a pronounced upper-level trough are indicative of an intensifying low-pressure system with stronger surface fronts and associated winds
- 500-mb height and relative vorticity forecasts to identify and track troughs and vorticity maxima
- MSLP and surface wind forecast charts for fronts and potentially strong wind conditions
Step 2: Looking at the forecast soundings from WRF or COAMPS, determine the forecast stability and wind profile at your forecast time of interest.
Step 3: Check the 6-hrly precipitation and 700-mb relative humidity forecast charts to determine where increased moisture and precipitation are anticipated since they decrease the probability of dust lofting.
Step 4: Combine COAMPS forecasts of surface friction velocity, surface winds, and soil wetness from WRF and/or COAMPS with your knowledge of dust source areas to see if the criteria for a potential blowing dust event are met. Recall that friction velocity incorporates atmospheric stability and wind speed into one variable.
Step 5: Examine DTA-WRF and COAMPS forecasts of surface visibility due to dust. Compare them to WRF and COAMPS forecasts of winds through the mixed layer and dust optical depth to help assess changes in geographical extent and intensity with each successive model run.
Step 6: From the model output and your initial analysis, develop a best-guess forecast as to the onset and duration of any dust events in your area of responsibility in the 24- to 72-hr window.
Short-Range Forecast Process
The process for creating short-range (0- to 24-hr) dust forecasts includes the following steps.
Step 1: Analyze the present state of the atmosphere by looking at satellite imagery, upper-air charts, and surface analyses, keeping in mind the location and characteristics of relevant dust source regions.
Step 2: Examine the latest observed and/or forecast soundings from WRF and COAMPS. Note the strength of any inversions (usually during summertime) and determine if they will break due to turbulent mixing and daytime heating that would ripen the environment for a dust outbreak.
- A dry adiabatic lapse rate from the surface through a deep mixed layer allows the dust to loft to great heights, especially if winds are from the same direction and increase with height through the layer
- Note that dust storms generally occur in this kind of environment and that the strongest wind speed aloft within the dry adiabatic layer can be brought to the surface
- The height or top of an elevated dust layer can be approximated by determining where the lapse rate becomes less than the dry adiabatic lapse rate
- Dust storms are less likely in a stably stratified boundary layer although narrow plumes of blowing dust are still possible
Step 3: To determine the potential duration and type of dust event, pay special attention to dust lofting in your area of responsibility, local rules-of-thumb about advection, and geographic features such as the location of dust source regions, terrain, vegetation, and water sources. Also note where precipitation has fallen in the past 48 hours and whether it was convective or stratiform.
Step 4: Use satellite dust enhancement products (such as enhanced infrared imagery) and RGB and other multispectral imagery tuned for dust detection. Integrating these products with surface observations can provide information about the current extent and location of existing dust plumes and fronts.
Step 5: Make a best-guess forecast as to the onset, duration, and persistence of any dust events in your area of responsibility in the very short term, using short-range mesoscale model output from DTA-WRF and/or COAMPS as guidance. The global DTA-GFS and NAAPS models can resolve large-scale features that drive smaller-scale dust events in the short term but cannot resolve localized dust features.
The rest of this section examines a dust storm case from Southwest Asia, focusing on the use of model data in the dust forecast process. Since these data play a critical role in the long- to short-range forecast processes, we'll focus on those periods more than the nowcasting stage when real-time observational data are more important.
We will provide a limited set of the data and products normally available in an operational environment. The products presented here highlight salient meteorological features that factor prominently into the making of a good dust storm forecast for the scenario.
Case Study: Long-Range Forecast (21 Feb 2010)
Before looking at the data, take a minute to consider the area’s dust climatology at this time of year.
- Dust storms occur 2 or 3 times a month
- Dust events typically last 24 to 36 hours but are sometimes 3 to 5 days long
- Moderate to strong cold fronts and strong pressure gradients are common, leading to pre- and post-frontal dust storms
Open the Notes window by clicking the link at the top of the page. As you go through each time period, record your findings so you can refer to them later. If you want to keep the file, you’ll need to save it to your computer. (It will not be saved as part of the module.)
Several other resources are also available via the links at the top of each page:
- Various maps of the Middle East
- A summary of the dust storm forecast process
Question & Data
Use the tabs to review the data for this time period, then answer the question below.
Which of the following are evident in the charts? (Choose all that apply.)
The correct answers are a) and b).
According to the models, the upper-level pattern supports the development of surface low pressure and a moderate to strong cold front. This can potentially lead to both:
- A post-frontal shamal over the upper half of the Saudi Peninsula
- A prefrontal event ahead of the cold front over the Saudi Plateau and the southern Persian Gulf region in the 120-hr forecast time frame
The progressive movement of the longwave pattern indicates that this will be a relatively short-lived event. (Remember to record this information in Notes.)
The first indication that a severe dust storm may be on the horizon is seen in the 300-mb and 500-mb GFS forecast charts valid for five days from now, on 26 February 2010.
Both charts show a strong middle- and upper-level trough over the Middle East and Iraq on 26 February, with a 300-mb jet streak exceeding 100 knots in the base of the trough.
If we only consider the large-scale or synoptic forcing (not mesoscale features such as the surface low and associated fronts), the DTA-GFS and NAAPS surface visibility forecasts show a pan-regional dust event impacting north Africa and the Arabian Peninsula on 26 February.
Case Study: Medium-Range Forecast (24 Feb 2010)
Question & Data
It’s three days later, 24 February 2010 at 12Z, and new forecast charts are ready for you to examine. Use the tabs to review the data, then answer the question below.
Based on the model output and your assessment from the long-range period, what is your best guess as to the onset of any dust events in the next 24 to 72 hours? (Choose the best answer.)
The correct answer is b).
The guidance indicates that a potential blowing dust event will begin around 12Z on 26 February.
Forty-eight hours prior to the anticipated onset of the dust event, the NWP models continue to forecast a well-developed, midlatitude trough over the area. In the upper troposphere, the forecasted 300-mb winds show a jet maximum exceeding 100 knots developing over northern Saudi Arabia, the northern Persian Gulf, and western Iran. Recall that the presence of a strong jet streak supports the strengthening of the surface cold front and associated winds, and will increase the potential for blowing dust.
The WRF 45-km, 48-hr, 500-mb chart is similar to the GFS 5-day, 500-mb forecast that we saw in the long-range forecast period.
- WRF 500-mb Hght. & Relative Vorticity 48-hr Fcst.
- GFS 500-mb Hght. & Relative Vorticity 114-hr Fcst.
The WRF 45-km, 700-mb height chart shows that the forecasted trough will extend down the Red Sea and Saudi Peninsula on 26 February. As the green shading indicates, high relative humidities through a deep layer are predicted for most of Syria, Jordan, and Iraq.
Looking at the precipitation forecasts, the WRF also indicates that favorable dynamics and moisture ahead of the trough may lead to convection and rainfall across that region. Any significant precipitation would suppress the lofting of dust.
The surface temperature forecast shows that early on 26 February, the cold front extends from northern Iraq across the Red Sea and Arabian Peninsula and into Egypt.
Several features are noteworthy in the COAMPS plot of forecasted surface friction velocity, streamlines, and ground wetness (see below). (Recall that friction velocity incorporates atmospheric stability and wind speed.) High friction velocities (the shaded areas) are forecast north and south of the Iraqi border, Kuwait, over the Saudi Plateau, and coastal and inland areas of the United Arab Emirates, indicating that dust mobilization is likely in these areas. The streamline forecast shows southerly flow over the southeastern portion of the Saudi peninsula and a surface low in western Iraq.
COAMPS plots of forecast surface visibility, dust surface concentration, and dust optical depth support the idea that conditions will become favorable for a widespread blowing dust event by 12Z on 26 February for these areas.
Comparing the 48-hr and earlier 72-hr forecast plots, we see that the COAMPS model continues to refine both the intensity and areal extent of the anticipated dust event.
There's an increasing probability that blowing dust will become more intense and that surface visibilities will reduce over southern Iraq, Kuwait, and adjacent regions of Saudi Arabia.
Case Study: Short-Range Forecast (25 Feb 2010)
Question & Data
It's 12Z 25 February 2010 and time to check the new data. Examine the Meteosat-7 visible, infrared, and water vapor imagery as well as the charts and surface analyses, noting the positions and progression of middle- and upper-level troughs, wind maxima, and surface features, including fronts and pressure gradients. Then answer the question below.
Based on short-range mesoscale model output, what is your revised best guess as to the onset and duration of any dust events? (Choose the best answer from each group, then click Done.)
The correct answers are b) and g).
The guidance indicates that a potential blowing dust event will begin in the 6Z to 12Z timeframe on 26 February. Given the speed at which the shortwave pattern, low pressure system, and fronts are progressing, it's likely that the event will last up to 12 or possibly 24 hours.
On 25 February (24 hours out), the COAMPS and DTA-WRF mesoscale models are predicting a widespread dust event for East Africa and the Saudi Peninsula. The blue oval over the Red Sea shows the location of a dust front forecasted by both dust models at 12Z 26 February. Notice how the DTA-WRF dust surface concentration makes it easier to see the edge of the predicted dust front over the Red Sea than the DTA-WRF visibility chart. The blue oval over the southern Persian Gulf shows the mobilization of dust from the interior Arabian Peninsula and coastal UAE. Finally, reduced visibilities between 0.5 and 5 miles at 100 m above the surface are forecast for this area and over the waters of the Southern Persian Gulf.
The COAMPS surface friction velocity and GFS 700-mb relative humidity charts indicate that Jordan, Syria, and most of Iraq will not experience low visibilities due to dust storms.
Furthermore, the 24-hr WRF precipitation forecast shows that portions of these countries and western Iran are likely to experience widespread shower activity early on 26 February.
On the other hand, both COAMPS and DTA-WRF are forecasting low visibilities north and south of the southern Iraqi border and over Kuwait as seen in the blue oval over this region. The models differ with respect to dust activity over the Saudi Plateau (the orange boxes). COAMPS is predicting widespread dust mobilization while DTA-WRF shows little, if any, dust activity (in effect, good visibility conditions). In the next section, we'll discuss why dust model forecasts like these can differ.
Dust Event Onset (26 February 2010)
It’s 6Z on 26 February. We’ve been anticipating the arrival of a significant dust event within the 6Z to 12Z timeframe. The surface analysis valid at that time shows an elongated surface low stretching from the eastern Mediterranean Sea east and south into southern Iraq and the northeastern Arabian Peninsula. A strong surface cold front extends from southern Iraq southwest across the Arabian plateau. As we saw during previous forecast periods, blowing dust conditions were anticipated in advance of the front as well as behind it where surface and atmospheric conditions (winds, instability, and no significant precipitation) were favorable for dust lofting.
What tools other than ground-based observations and visible and infrared satellite imagery would help you monitor the onset and development of blowing dust? (Choose the best answer.)
The correct answer is c), explained below.
Model guidance is more suitable for information on the onset and duration of an event. Ground-based weather radar can see higher concentrations of blowing dust but has a limited range and can only observe levels near the surface at short range. Both geostationary and polar-orbiting color composite images (RGBs) are excellent tools for monitoring blowing dust under clear sky conditions. Unlike single channel imagery, they have the sensitivity of multiple channels and are typically tuned to highlight dust compared to clouds and other phenomena that may also be present.
For example, the MODIS dust product on 26 February 2010 shows the anticipated large-scale dust event impacting north Africa and the Saudi Peninsula. The white ovals show a dust front over the Red Sea and dust plumes streaming out of the UAE into the southern Persian Gulf. Within the red circle, the clouds exhibit a pinwheel pattern that qualitatively confirms the high relative humidities and cyclonic streamline forecasts. The pink areas indicate that dust is being mobilized and entrained into the low along the Iraqi/Saudi border, as was forecast by both dust models.
In the MSG dust RGB, elevated dust is more difficult to identify over Kuwait due to the presence of low- and mid-level clouds. But the image confirms dust activity over the Saudi Plateau as was forecast by COAMPS.
The following MSG dust RGB animation lets us see when various dust sources become active and monitor the evolution of dust plumes as dust is transported downwind of its source region. In the 0Z to 12Z animation, we see that the event begins in earnest over southern Iraq and the Arabian peninsula within the 6Z to 9Z timeframe (the magenta area).
Later, in the 12Z to 00Z loop, we see the dust event unfolding across the region on either side of the advancing cold front and around the circulation of the surface low as it intensifies and moves eastward from southern Iraq into the northern Persian Gulf region.
At 12Z, surface observations report dust storms ($) and suspended dust (S) over eastern Africa, the western and eastern shores of the Red Sea, southern Iraq, northern and central Saudi Arabia, and the United Arab Emirates. There are no reports of dust storms ($) and suspended dust (S) over Syria, Jordan, or northern Iraq. Reports of precipitation (the green symbols) are seen in Syria, Iraq, and western Iran.
Surface visibilities from reporting stations on this 12Z METAR chart confirm blinding conditions due to blowing dust for the following areas:
- Along the Iraqi and Saudi border with visibilities of less than 0.25 miles (white circle)
- Over Kuwait with visibilities of 0.5 to 1 miles (red circle)
- Over the Saudi plateau with visibilities of 0.5 to 1 miles (red circles)
Why Dust Model Forecasts Differ
Identifying Dust Sources
As you've seen, the process of forecasting dust storms and surface visibility depends largely on model forecasts, which can differ widely. In this section, we'll examine the main factors that account for these differences. These include the models' processes for identifying dust sources, their dust transport dynamics, and their dust removal processes.
The most critical factors in differentiating dust model forecasts are how dust sources are identified and the resolution of the data. Some models get their dust source information only from satellites…
…while others use a combination of satellite, topographic, and land surface data, station data, atlases, and soil samples.
- COAMPS gets its dust sources from 1-km dust enhancement product imagery, atlases, and maps
- DTA WRF, MM5, and GFS dust models locate dust source regions from satellite data, topography, and/or soil moisture, which can vary in precision from 15 to 55 km
- NAAPS uses a combination of satellite data and land surface information to identify dust sources on a global grid with a resolution of 1-degree latitude and longitude
The important thing is how a model determines the number and extent of dust sources in each grid box. If a model 'thinks' that many dust sources cover a significant portion of a grid box, it may predict large, broad plumes for the area. Another model may not show any dust sources for the box. If that's not correct, it may reflect a weakness in how the model determines erodible or dust-producing areas.
The models used in the 26 February case have varied dust source functions. Some forecast broad plumes, others narrow plumes. Some have too many dust sources, others too few.
For example, COAMPS forecasted several refined plumes for the interior of the UAE and its coast...
… while DTA-WRF forecasted a broad dust plume with low visibilities (1 to 0.5 miles or 1.6 to 0.8 km) from Qatar to the Strait of Hormuz. The COAMPS forecast was based on a few, limited dust sources, whereas DTA-WRF had too many.
Over the Saudi Plateau, COAMPS over-predicted surface dust concentrations, leading to a broad area with visibilities from 2 to 0.5 miles (3.2 to 0.8 km). In contrast, DTA-WRF did not forecast any reduced visibility plumes. This suggests that COAMPS had too many dust sources for this area, while DTA-WRF had too few.
Model Dynamics & Dust Removal Processes
NWP models produce different dynamical forecasts due to their sensitivity to initial conditions. This can lead to different atmospheric motions and stability, which can create variations in the strength and location of upper-level short waves, surface lows, associated fronts, and surface winds. Differences in the forecasted strength and location of surface winds account for different dust visibility forecasts among models.
Finally, a model's handling of soil moisture and precipitation impacts its treatment of dust production and removal. Source areas with significant rainfall in previous time steps will have high soil moisture values and suppress dust production in current forecasts. Since rain removes suspended dust particles, differences in forecasted precipitation patterns lead to different visibility forecasts.
You’ve reached the end of the module. Having a process for forecasting dust storms and a better understanding of why model forecasts can differ should make you feel more comfortable about forecasting for dust-prone areas.
Read through the summary, then complete the module quiz and module survey.
Visibility: Intense dust storms reduce visibility to near zero in and near source regions, with visibility improving away from the source.
Dust moves through saltation (small particles jump and skip and are lifted into the air), creep (sediment rolls and slides along the ground), and suspension (dust is lifted into the air and held aloft by winds).
Sources of dust: Deserts, agricultural areas, coastal areas, river flood plains, ocean sediments, glacial sediments, and dry lake beds; most dust comes from discrete areas (point sources)
Dust storm requirements: An appropriate source of dust, sufficient wind and turbulence, and an unstable atmosphere
Processes that remove dust: Dispersion, advection, entrainment in precipitation, gravity
Prefrontal dust storms: A band of winds generated by and ahead of a low-pressure area that presses against, for example, a stationary high-pressure center or mountains
Post-frontal dust storms: Widespread dust following pre-frontal events; a shamal is a dust storm resulting from strong northwesterly winds on the backside of a cold front
Mesoscale phenomena that cause dust storms: Downslope winds, gap flow, and convection (haboob: a dust storm caused by convective downbursts)
Climatology provides data that help forecast the location, seasonality, frequency, and severity of dust storms
Satellite detection of dust has dramatically improved through the use of multispectral products, such as dust RGBs
Aerosol Optical Depth: Measures the light that airborne particles prevent from passing through the atmosphere; does not translate directly into surface visibility estimates but serves as a first-order indicator of how dusty the atmosphere is
Dust Forecast Process:
Long range (72 to 180 hr):
- Look for large-scale, synoptically driven dust events in the 3 to 7.5-day range in global models, such as DTA-GFS and NAAPS.
- Look for model-forecast midlatitude troughs that drive pre- and post-frontal dust storms in winter and that can amplify the large-scale wind patterns associated with summer events, such as the northerly winds that create shamals. These large-scale waves are resolved by global NWP models such as GFS and NOGAPS, while the associated dust outbreaks are modeled by the global dust models, DTA-GFS and NAAPS.
Medium-range (24- to 72-hr):
- Examine the following: 300-mb height and wind forecast charts to track troughs and jet streaks; briefly examine upper-tropospheric winds to identify the presence of any jet streaks, especially for cool-season dust storms. Jet streaks within a pronounced upper-level trough are indicative of an intensifying low-pressure system with stronger surface fronts and associated winds; 500-mb height and relative vorticity forecasts to identify and track troughs and vorticity maxima; and MSLP and surface wind forecast charts for fronts and potentially strong wind conditions
- Looking at the forecast soundings from WRF or COAMPS, determine the forecast stability and wind profile at your forecast time of interest.
- Check the 6-hrly precipitation and 700-mb relative humidity forecast charts to determine where increased moisture and precipitation are anticipated since they decrease the probability of dust lofting.
- Combine COAMPS forecasts of surface friction velocity, surface winds, and soil wetness from WRF and/or COAMPS with your knowledge of dust source areas to see if the criteria for a potential blowing dust event are met. Recall that friction velocity incorporates atmospheric stability and wind speed into one variable.
- Examine DTA-WRF and COAMPS forecasts of surface visibility due to dust. Compare them to WRF and COAMPS forecasts of winds through the mixed layer and dust optical depth to help assess changes in geographical extent and intensity with each successive model run.
- From the model output and your initial analysis, develop a best-guess forecast as to the onset and duration of any dust events in your area of responsibility in the 24- to 72-hr window.
Short-range (0- to 24-hr):
- Analyze the present state of the atmosphere by looking at satellite imagery, upper-air charts, and surface analyses, keeping in mind the location and characteristics of relevant dust source regions.
- Examine the latest observed and/or forecast soundings from WRF and COAMPS. Note the strength of any inversions (usually during summertime) and determine if they will break due to turbulent mixing and daytime heating that would ripen the environment for a dust outbreak.
- To determine the potential duration and type of dust event, pay special attention to dust lofting in your area of responsibility, local rules-of-thumb about advection, and geographic features such as the location of dust source regions, terrain, vegetation, and water sources. Also note where precipitation has fallen in the past 48 hours and whether it was convective or stratiform.
- Use satellite dust enhancement products (such as enhanced infrared imagery) and RGB and other multispectral imagery tuned for dust detection. Integrating these products with surface observations can provide information about the current extent and location of existing dust plumes and fronts.
- Make a best-guess forecast as to the onset, duration, and persistence of any dust events in your area of responsibility in the very short term, using short-range mesoscale model output from DTA-WRF and/or COAMPS as guidance. The global DTA-GFS and NAAPS models can resolve large-scale features that drive smaller-scale dust events in the short term but cannot resolve localized dust features.
Functions, Domain, and Range Study Guide
Introduction to Functions, Domain, and Range
Mathematics is an escape from reality
—Stanislaw Ulam (1909–1984) Polish Mathematician
In this lesson, you'll learn how to determine if an equation is a function, and how to find the domain and range of a function.
Almost every line we have seen in the last few lessons has been a function. An equation is a function if every x value has no more than one y value. For instance, the equation y = 2x is a function, because there is no value of x that could result in two different y values. When an equation is a function, we can replace y with f(x), which is read as "f of x." If you see an equation written as f(x) = 2x, you are being told that y is a function of x, and that the equation is a function.
The equation x = 5 is not a function. When x is 5, y has many different values. Vertical lines are not functions. They are the only type of line that is not a function.
What about the equation y = x²? Positive and negative values of x result in the same y value, but that is just fine. A function can have y values that each have more than one x value, but a function cannot have x values that each have more than one y value. There is no number that can be substituted for x that results in two different y values, so y = x² is a function.
What about the equation y² = x? In this case, two different y values, such as 2 and –2, result in the same x value, 4, so y² = x is not a function. We must always be careful with equations that have even exponents. If we take the square root of both sides of the equation y² = x, we get y = √x, which is a function. Here, x cannot be negative, because we cannot find the square root of a negative number. Because x cannot be negative, y cannot be negative. This means that, unlike y² = x, there is no x value having two y values.
When you are trying to decide whether an equation is a function, always ask: Is there a value of x having two y values? If so, the equation is not a function.
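If it helps to see this test in action, here is a short Python sketch (not part of the lesson) that checks a list of (x, y) pairs for any x value paired with two different y values:

```python
def is_function(points):
    """Return True if no x value is paired with two different y values."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False  # same x, two different y values: not a function
        seen[x] = y
    return True

# y = x^2: the x values -2 and 2 share the y value 4, which is allowed
print(is_function([(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)]))  # True

# y^2 = x: the x value 4 is paired with both y = 2 and y = -2
print(is_function([(4, 2), (4, -2), (1, 1), (1, -1)]))          # False
```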
Vertical Line Test
When we can see the graph of an equation, we can easily identify whether the equation is a function by using the vertical line test. If a vertical line can be drawn anywhere through the graph of an equation, such that the line crosses the graph more than once, then the equation is not a function. Why? Because a vertical line represents a single x value, and if a vertical line crosses a graph more than once, then there is more than one y value for that x value.
Look at the following graph. We do not know what equation is shown, but we know that it is a function, because there is no place on the graph where a vertical line will cross the graph more than once.
Even if there are many places on a graph that pass the vertical line test, if there is even one point for which the vertical line test fails, then the equation is not a function. The following graph, a circle, is not a function, because there are many x values that have two y values. The dark line drawn where x = 5 shows that the graph fails the vertical line test. The line crosses the circle in two places.
A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.
Most modern desktop and server CPUs have at least three independent caches: an instruction cache to speed up executable instruction fetch, a data cache to speed up data fetch and store, and a translation lookaside buffer (TLB) used to speed up virtual-to-physical address translation for both executable instructions and data. The data cache is usually organized as a hierarchy of two or more cache levels (L1, L2, etc.; see Multi-level caches).
Cache entries
Data is transferred between memory and cache in blocks of fixed size, called cache lines. When a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as the requested memory location (now called a tag).
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache. The cache checks for the contents of the requested memory location in any cache lines that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred. However, if the processor does not find the memory location in the cache, a cache miss has occurred. In the case of:
- a cache hit, the processor immediately reads or writes the data in the cache line.
- a cache miss, the cache allocates a new entry, and copies in data from main memory. Then, the request is fulfilled from the contents of the cache.
Cache performance
The proportion of accesses that result in a cache hit is known as the hit rate, and can be a measure of the effectiveness of the cache for a given program or algorithm.
Read misses delay execution because they require data to be transferred from memory much more slowly than the cache itself. Write misses may occur without such penalty, since the processor can continue execution while data is copied to main memory in the background.
Instruction caches are similar to data caches, but the CPU only performs read accesses (instruction fetches) to the instruction cache. (With Harvard architecture and modified Harvard architecture CPUs, instruction and data caches can be separated for higher performance, but they can also be combined to reduce the hardware overhead.)
Replacement policies
In order to make room for the new entry on a cache miss, the cache may have to evict one of the existing entries. The heuristic that it uses to choose the entry to evict is called the replacement policy. The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, so there is no perfect way to choose among the variety of replacement policies available.
One popular replacement policy, least-recently used (LRU), replaces the least recently accessed entry.
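As a minimal sketch of the idea (illustrative Python, not from this article, with an assumed 2-way set and a simple access counter standing in for the hardware's LRU bits):

```python
class LRUSet:
    """One cache set that evicts its least-recently-used line when full."""
    def __init__(self, ways=2):
        self.ways = ways      # associativity of the set (assumed value)
        self.lines = {}       # tag -> time of last access
        self.clock = 0

    def access(self, tag):
        self.clock += 1
        if tag in self.lines:                  # hit: refresh recency
            self.lines[tag] = self.clock
            return "hit"
        if len(self.lines) >= self.ways:       # miss on a full set: evict LRU line
            victim = min(self.lines, key=self.lines.get)
            del self.lines[victim]
        self.lines[tag] = self.clock           # fill the new line
        return "miss"

s = LRUSet(ways=2)
print([s.access(t) for t in ["A", "B", "A", "C", "B"]])
# ['miss', 'miss', 'hit', 'miss', 'miss'] -- C evicts B, so the later B misses again
```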
Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed. This avoids the overhead of loading data into the cache that will never be reused.
- Cache entries may also be disabled or locked depending on the context.
Write policies
If data is written to the cache, at some point it must also be written to main memory. The timing of this write is known as the write policy.
- In a write-through cache, every write to the cache causes a write to main memory.
- Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to the main memory. Instead, the cache tracks which locations have been written over (these locations are marked dirty). The data in these locations are written back to the main memory only when that data is evicted from the cache. For this reason, a read miss in a write-back cache may sometimes require two memory accesses to service: one to first write the dirty location to memory and then another to read the new location from memory.
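The behavioral difference can be sketched in a few lines of Python (an illustrative model, not the article's; main memory and the cache are reduced to dictionaries, and eviction is triggered by hand):

```python
memory = {}   # simplified main memory: address -> value
cache = {}    # address -> {"value": ..., "dirty": bool}

def write(addr, value, policy="write-back"):
    cache[addr] = {"value": value, "dirty": policy == "write-back"}
    if policy == "write-through":
        memory[addr] = value              # every write also goes to main memory

def evict(addr):
    line = cache.pop(addr)
    if line["dirty"]:                     # write-back: flush only on eviction
        memory[addr] = line["value"]

write(0x10, 42, policy="write-back")
print(memory.get(0x10))   # None: main memory has not been updated yet
evict(0x10)
print(memory.get(0x10))   # 42: the dirty line was written back on eviction
```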
There are intermediate policies as well. The cache may be write-through, but the writes may be held in a store data queue temporarily, usually so that multiple stores can be processed together (which can reduce bus turnarounds and improve bus utilization).
The data in main memory being cached may be changed by other entities (e.g. peripherals using direct memory access or multi-core processor), in which case the copy in the cache may become out-of-date or stale. Alternatively, when the CPU in a multi-core processor updates the data in the cache, copies of data in caches associated with other cores will become stale. Communication protocols between the cache managers which keep the data consistent are known as cache coherence protocols.
CPU stalls
The time taken to fetch one cache line from memory (read latency) matters because the CPU will run out of things to do while waiting for the cache line. When a CPU reaches this state, it is called a stall.
As CPUs become faster, stalls due to cache misses displace more potential computation; modern CPUs can execute hundreds of instructions in the time taken to fetch a single cache line from main memory. Various techniques have been employed to keep the CPU busy during this time.
- Out-of-order CPUs (Pentium Pro and later Intel designs, for example) attempt to execute independent instructions after the instruction that is waiting for the cache miss data.
- Another technology, used by many processors, is simultaneous multithreading (SMT), or — in Intel's terminology — hyper-threading (HT), which allows an alternate thread to use the CPU core while a first thread waits for data to come from main memory.
Cache entry structure
Cache row entries usually have the following structure:
| tag | data block | flag bits |
The data block (cache line) contains the actual data fetched from the main memory. The tag contains (part of) the address of the actual data fetched from the main memory. The flag bits are discussed below.
The "size" of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The number of tag and flag bits is irrelevant to this calculation, although it does affect the physical area of a cache).
The index describes which cache row (which cache line) the data has been put in. The index length is ⌈log2(s)⌉ bits for s cache rows. The block offset specifies the desired data within the stored data block within the cache row. Typically the effective address is in bytes, so the block offset length is ⌈log2(b)⌉ bits, where b is the number of bytes per data block. The tag contains the most significant bits of the address, which are checked against the current row (the row has been retrieved by index) to see if it is the one we need or another, irrelevant memory location that happened to have the same index bits as the one we want. The tag length in bits is address_length − index_length − block_offset_length.
The original Pentium 4 had a 4-way set associative L1 data cache of size 8 kB with 64 byte cache blocks. Hence, there are 8 kB / 64 = 128 cache blocks. If it's 4-way set associative, this implies 128 / 4 = 32 sets (and hence 2^5 = 32 different indices). There are 64 = 2^6 possible offsets. Since the CPU address is 32 bits, this implies 32 = 21 + 5 + 6, and hence 21 bits of tag field. The original Pentium 4 also had an 8-way set associative L2 integrated cache of size 256 kB with 128 byte cache blocks. This implies 32 = 17 + 8 + 7, and hence 17 bits of tag field.
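The arithmetic in this example can be reproduced with a short Python sketch (an illustration, not part of the article) that splits a 32-bit address into tag, index, and offset widths from the cache parameters:

```python
from math import log2

def address_fields(cache_bytes, block_bytes, ways, address_bits=32):
    sets = cache_bytes // (block_bytes * ways)
    offset_bits = int(log2(block_bytes))
    index_bits = int(log2(sets))
    tag_bits = address_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

# Pentium 4 L1 data cache: 8 kB, 64-byte lines, 4-way set associative
print(address_fields(8 * 1024, 64, 4))      # (21, 5, 6)

# Pentium 4 L2 cache: 256 kB, 128-byte lines, 8-way set associative
print(address_fields(256 * 1024, 128, 8))   # (17, 8, 7)
```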
Flag bits
An instruction cache requires only one flag bit per cache row entry: a valid bit. The valid bit indicates whether or not a cache block has been loaded with valid data.
On power-up, the hardware sets all the valid bits in all the caches to "invalid". Some systems also set a valid bit to "invalid" at other times—such as when multi-master bus snooping hardware in the cache of one processor hears an address broadcast from some other processor, and realizes that certain data blocks in the local cache are now stale and should be marked invalid.
A data cache typically requires two flag bits per cache row entry: a valid bit and also a dirty bit. The dirty bit indicates whether that block has been unchanged since it was read from main memory ("clean") or whether the processor has written data to that block and the new value has not yet made it all the way to main memory ("dirty").
Associativity
The replacement policy decides where in the cache a copy of a particular entry of main memory will go. If the replacement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct mapped. Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache, and are described as N-way set associative. For example, the level-1 data cache in an AMD Athlon is 2-way set associative, which means that any particular location in main memory can be cached in either of 2 locations in the level-1 data cache.
Associativity is a trade-off. If there are ten places to which the replacement policy could have mapped a memory location, then to check if that location is in the cache, ten cache entries must be searched. Checking more places takes more power, chip area, and potentially time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses, below), so that the CPU wastes less time reading from the slow main memory. The rule of thumb is that doubling the associativity, from direct mapped to 2-way, or from 2-way to 4-way, has about the same effect on hit rate as doubling the cache size. Associativity increases beyond 4-way have much less effect on the hit rate, and are generally done for other reasons (see virtual aliasing, below).
In order from simpler but worse to more complex but better:
- direct mapped cache — The best (fastest) hit times, and so the best tradeoff for "large" caches
- 2-way set associative cache
- 2-way skewed associative cache – In 1993, this was the best tradeoff for caches whose sizes were in the range 4K-8K bytes.
- 4-way set associative cache
- fully associative cache – the best (lowest) miss rates, and so the best tradeoff when the miss penalty is very high
Direct-mapped cache
Here each location in main memory can only go in one entry in the cache. It doesn't have a replacement policy as such, since there is no choice of which cache entry's contents to evict. This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable. If x is the block number in the cache, y is the block number in memory, and n is the number of blocks in the cache, then the mapping is given by x = y mod n.
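A minimal sketch of that mapping (illustrative Python, with an assumed cache of n = 8 blocks) shows how two memory blocks with the same y mod n keep evicting each other:

```python
n = 8                    # number of blocks in the cache (assumed)
cache = [None] * n       # each slot remembers which memory block it holds

def access(y):
    x = y % n            # the only slot that memory block y may occupy
    if cache[x] == y:
        return "hit"
    cache[x] = y         # replace whatever was there before
    return "miss"

# Memory blocks 3 and 11 both map to slot 3 and repeatedly knock each other out
print([access(b) for b in [3, 11, 3, 11]])   # ['miss', 'miss', 'miss', 'miss']
```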
2-way set associative cache
If each location in main memory can be cached in either of two locations in the cache, one logical question is: which one of the two? The simplest and most commonly used scheme, shown in the right-hand diagram above, is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index. One benefit of this scheme is that the tags stored in the cache do not have to include that part of the main memory address which is implied by the cache memory's index. Since the cache tags have fewer bits, they take less area on the microprocessor chip and can be read and compared faster. Also LRU is especially simple since only one bit needs to be stored for each pair.
Speculative execution
One of the advantages of a direct mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index which might have a copy of that location in memory is known. That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address.
The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well. A subset of the tag, called a hint, can be used to pick just one of the possible cache entries mapping to the requested address. The entry selected by the hint can then be used in parallel with checking the full tag. The hint technique works best when used in the context of address translation, as explained below.
2-way skewed associative cache
Other schemes have been suggested, such as the skewed cache, where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function. Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; LRU tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.
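The sketch below shows only the indexing idea: way 0 uses the plain index while way 1 scrambles the address with a hash, so lines that conflict in way 0 usually do not conflict in way 1. The hash function here is an arbitrary stand-in for illustration, not the one proposed in the skewed-associative literature.

```python
NUM_SETS = 64
BLOCK_SIZE = 64

def skewed_indexes(address: int) -> tuple[int, int]:
    """Return (index for way 0, index for way 1) of a 2-way skewed cache."""
    block = address // BLOCK_SIZE
    index0 = block % NUM_SETS                          # direct index, as in a normal cache
    # Illustrative hash: XOR-fold higher block bits onto the index bits.
    index1 = (block ^ (block >> 6) ^ (block >> 12)) % NUM_SETS
    return index0, index1

# Blocks that collide in way 0 (same low index bits) are spread out in way 1.
for block in (0x040, 0x080, 0x0C0):
    print(skewed_indexes(block * BLOCK_SIZE))   # (0, 1), (0, 2), (0, 3)
```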
Pseudo-associative cache
A true set-associative cache tests all the possible ways simultaneously, using something like a content addressable memory. A pseudo-associative cache tests each possible way one at a time. A hash-rehash cache and a column-associative cache are examples of pseudo-associative cache.
In the common case of finding a hit in the first way tested, a pseudo-associative cache is as fast as a direct-mapped cache. But it has a much lower conflict miss rate than a direct-mapped cache, closer to the miss rate of a fully associative cache.
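A sketch of the sequential probing that a hash-rehash style pseudo-associative cache performs; the rehash function, geometry, and fill policy are assumptions chosen only to illustrate the idea.

```python
NUM_LINES = 64
BLOCK_SIZE = 64
lines = [None] * NUM_LINES   # each entry holds a tag, or None if empty

def probe(address: int) -> str:
    """Try the primary slot first, then a single rehashed slot."""
    block = address // BLOCK_SIZE
    tag = block // NUM_LINES
    first = block % NUM_LINES
    second = first ^ (NUM_LINES >> 1)    # illustrative rehash: flip the top index bit
    if lines[first] == tag:
        return "fast hit"                # same latency as a direct-mapped hit
    if lines[second] == tag:
        return "slow hit"                # the second probe costs extra time
    lines[second] = lines[first]         # one possible fill policy: demote the primary entry
    lines[first] = tag
    return "miss"

print(probe(0x0000))   # miss
print(probe(0x0000))   # fast hit
```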
Cache miss
A cache miss refers to a failed attempt to read or write a piece of data in the cache, which results in a main memory access with much longer latency. There are three kinds of cache misses: instruction read miss, data read miss, and data write miss.
A cache read miss from an instruction cache generally causes the most delay, because the processor, or at least the thread of execution, has to wait (stall) until the instruction is fetched from main memory.
A cache read miss from a data cache usually causes less delay, because instructions not dependent on the cache read can be issued and continue execution until the data is returned from main memory, and the dependent instructions can resume execution.
A cache write miss to a data cache generally causes the least delay, because the write can be queued and there are few limitations on the execution of subsequent instructions. The processor can continue until the queue is full.
In order to lower cache miss rate, a great deal of analysis has been done on cache behavior in an attempt to find the best combination of size, associativity, block size, and so on. Sequences of memory references performed by benchmark programs are saved as address traces. Subsequent analyses simulate many different possible cache designs on these long address traces. Making sense of how the many variables affect the cache hit rate can be quite confusing. One significant contribution to this analysis was made by Mark Hill, who separated misses into three categories (known as the Three Cs):
- Compulsory misses are those misses caused by the first reference to a location in memory. Cache size and associativity make no difference to the number of compulsory misses. Prefetching can help here, as can larger cache block sizes (which are a form of prefetching). Compulsory misses are sometimes referred to as cold misses.
- Capacity misses are those misses that occur regardless of associativity or block size, solely due to the finite size of the cache. The curve of capacity miss rate versus cache size gives some measure of the temporal locality of a particular reference stream. Note that there is no useful notion of a cache being "full" or "empty" or "near capacity": CPU caches almost always have nearly every line filled with a copy of some line in main memory, and nearly every allocation of a new line requires the eviction of an old line.
- Conflict misses are those misses that could have been avoided, had the cache not evicted an entry earlier. Conflict misses can be further broken down into mapping misses, that are unavoidable given a particular amount of associativity, and replacement misses, which are due to the particular victim choice of the replacement policy.
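The trace-driven methodology described above can be sketched in a few lines of Python: replay a recorded address trace through a model cache and count the misses. The model below is a toy direct-mapped cache that ignores write policy; it only hints at what full simulators such as Dinero do.

```python
def simulate(trace, num_lines=1024, block_size=64):
    """Replay a list of byte addresses through a direct-mapped cache model."""
    tags = [None] * num_lines
    misses = 0
    for addr in trace:
        block = addr // block_size
        index = block % num_lines
        tag = block // num_lines
        if tags[index] != tag:
            misses += 1            # a compulsory, capacity, or conflict miss
            tags[index] = tag
    return misses, len(trace)

# Example: a strided loop over 1 MB touches more data than this 64 kB model holds,
# so the second pass misses just as badly as the first.
trace = [i * 64 for i in range(16384)] * 2
print(simulate(trace))
```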
The graph to the right summarizes the cache performance seen on the Integer portion of the SPEC CPU2000 benchmarks, as collected by Hill and Cantin. These benchmarks are intended to represent the kind of workload that an engineering workstation computer might see on any given day. The reader should keep in mind that finding benchmarks which are even usefully representative of many programs has been very difficult, and there will always be important programs with very different behavior than what is shown here.
We can see the different effects of the three Cs in this graph.
At the far right, with cache size labelled "Inf", we have the compulsory misses. If we wish to improve a machine's performance on SpecInt2000, increasing the cache size beyond 1 MB is essentially futile. That's the insight given by the compulsory misses.
The fully associative cache miss rate here is almost representative of the capacity miss rate. The difference is that the data presented is from simulations assuming an LRU replacement policy. Showing the capacity miss rate would require a perfect replacement policy, i.e. an oracle that looks into the future to find a cache entry which is actually not going to be hit.
Note that our approximation of the capacity miss rate falls steeply between 32 kB and 64 kB. This indicates that the benchmark has a working set of roughly 64 kB. A CPU cache designer examining this benchmark will have a strong incentive to set the cache size to 64 kB rather than 32 kB. Note that, on this benchmark, no amount of associativity can make a 32 kB cache perform as well as a 64 kB 4-way, or even a direct-mapped 128 kB cache.
Finally, note that between 64 kB and 1 MB there is a large difference between direct-mapped and fully associative caches. This difference is the conflict miss rate. The insight from looking at conflict miss rates is that secondary caches benefit a great deal from high associativity.
This benefit was well known in the late 1980s and early 1990s, when CPU designers could not fit large caches on-chip, and could not get sufficient bandwidth to either the cache data memory or cache tag memory to implement high associativity in off-chip caches. Desperate hacks were attempted: the MIPS R8000 used expensive off-chip dedicated tag SRAMs, which had embedded tag comparators and large drivers on the match lines, in order to implement a 4 MB 4-way associative cache. The MIPS R10000 used ordinary SRAM chips for the tags. Tag access for both ways took two cycles. To reduce latency, the R10000 would guess which way of the cache would hit on each access.
Address translation
Most general purpose CPUs implement some form of virtual memory. To summarize, each program running on the machine sees its own simplified address space, which contains code and data for that program only. Each program uses this virtual address space without regard for where it exists in physical memory.
Virtual memory requires the processor to translate virtual addresses generated by the program into physical addresses in main memory. The portion of the processor that does this translation is known as the memory management unit (MMU). The fast path through the MMU can perform those translations stored in the translation lookaside buffer (TLB), which is a cache of mappings from the operating system's page table.
For the purposes of the present discussion, there are three important features of address translation:
- Latency: The physical address is available from the MMU some time, perhaps a few cycles, after the virtual address is available from the address generator.
- Aliasing: Multiple virtual addresses can map to a single physical address. Most processors guarantee that all updates to that single physical address will happen in program order. To deliver on that guarantee, the processor must ensure that only one copy of a physical address resides in the cache at any given time.
- Granularity: The virtual address space is broken up into pages. For instance, a 4 GB virtual address space might be cut up into 1048576 pages of 4 kB size, each of which can be independently mapped. There may be multiple page sizes supported; see virtual memory for elaboration.
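A sketch of the page-level translation performed by the MMU, using the 4 kB page size from the granularity example above. The TLB contents are made-up values purely for illustration.

```python
PAGE_SIZE = 4096   # 4 kB pages, as in the example above
PAGE_SHIFT = 12

# Illustrative TLB: a small mapping from virtual page number to physical page number.
tlb = {0x00400: 0x12345, 0x00401: 0x00777}

def translate(virtual_address: int):
    """Return the physical address, or None on a TLB miss (a page-table walk is needed)."""
    vpn = virtual_address >> PAGE_SHIFT          # virtual page number
    offset = virtual_address & (PAGE_SIZE - 1)   # the page offset is unchanged by translation
    ppn = tlb.get(vpn)
    if ppn is None:
        return None                              # hardware or the OS would walk the page table here
    return (ppn << PAGE_SHIFT) | offset

print(hex(translate(0x00400123)))  # 0x12345123
```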
A historical note: some early virtual memory systems were very slow, because they required an access to the page table (held in main memory) before every programmed access to main memory.[NB 1] With no caches, this effectively cut the speed of the machine in half. The first hardware cache used in a computer system was not actually a data or instruction cache, but rather a TLB.
Caches can be divided into 4 types, based on whether the index or tag correspond to physical or virtual addresses:
- Physically indexed, physically tagged (PIPT) caches use the physical address for both the index and the tag. While this is simple and avoids problems with aliasing, it is also slow, as the physical address must be looked up (which could involve a TLB miss and access to main memory) before that address can be looked up in the cache.
- Virtually indexed, virtually tagged (VIVT) caches use the virtual address for both the index and the tag. This caching scheme can result in much faster lookups, since the MMU doesn't need to be consulted first to determine the physical address for a given virtual address. However, VIVT suffers from aliasing problems, where several different virtual addresses may refer to the same physical address. The result is that such addresses would be cached separately despite referring to the same memory, causing coherency problems. Another problem is homonyms, where the same virtual address maps to several different physical addresses. It is not possible to distinguish these mappings by only looking at the virtual index, though potential solutions include: flushing the cache after a context switch, forcing address spaces to be non-overlapping, tagging the virtual address with an address space ID (ASID), or using physical tags. Additionally, there is a problem that virtual-to-physical mappings can change, which would require flushing cache lines, as the VAs would no longer be valid.
- Virtually indexed, physically tagged (VIPT) caches use the virtual address for the index and the physical address in the tag. The advantage over PIPT is lower latency, as the cache line can be looked up in parallel with the TLB translation, however the tag can't be compared until the physical address is available. The advantage over VIVT is that since the tag has the physical address, the cache can detect homonyms. VIPT requires more tag bits, as the index bits no longer represent the same address.
- Physically indexed, virtually tagged (PIVT) caches are only theoretical as they would basically be useless.
The speed of this recurrence (the load latency) is crucial to CPU performance, and so most modern level-1 caches are virtually indexed, which at least allows the MMU's TLB lookup to proceed in parallel with fetching the data from the cache RAM.
But virtual indexing is not the best choice for all cache levels. The cost of dealing with virtual aliases grows with cache size, and as a result most level-2 and larger caches are physically indexed.
Caches have historically used both virtual and physical addresses for the cache tags, although virtual tagging is now uncommon. If the TLB lookup can finish before the cache RAM lookup, then the physical address is available in time for tag compare, and there is no need for virtual tagging. Large caches, then, tend to be physically tagged, and only small, very low latency caches are virtually tagged. In recent general-purpose CPUs, virtual tagging has been superseded by vhints, as described below.
Homonym and synonym problems
A cache that relies on virtual indexing and tagging becomes inconsistent when the same virtual address is mapped to different physical addresses (homonyms). This can be solved by using the physical address for tagging, or by storing the address space ID in the cache line. However, the latter approach does not help against the synonym problem, in which several cache lines end up storing data for the same physical address. Writing to such locations may update only one location in the cache, leaving the others with inconsistent data. This problem may be solved by using non-overlapping memory layouts for different address spaces; otherwise the cache (or part of it) must be flushed when the mapping changes.
The great advantage of virtual tags is that, for associative caches, they allow the tag match to proceed before the virtual to physical translation is done. However,
- coherence probes and evictions present a physical address for action. The hardware must have some means of converting the physical addresses into a cache index, generally by storing physical tags as well as virtual tags. For comparison, a physically tagged cache does not need to keep virtual tags, which is simpler.
- When a virtual to physical mapping is deleted from the TLB, cache entries with those virtual addresses will have to be flushed somehow. Alternatively, if cache entries are allowed on pages not mapped by the TLB, then those entries will have to be flushed when the access rights on those pages are changed in the page table.
It is also possible for the operating system to ensure that no virtual aliases are simultaneously resident in the cache. The operating system makes this guarantee by enforcing page coloring, which is described below. Some early RISC processors (SPARC, RS/6000) took this approach. It has not been used recently, as the hardware cost of detecting and evicting virtual aliases has fallen and the software complexity and performance penalty of perfect page coloring has risen.
It can be useful to distinguish the two functions of tags in an associative cache: they are used to determine which way of the entry set to select, and they are used to determine if the cache hit or missed. The second function must always be correct, but it is permissible for the first function to guess, and get the wrong answer occasionally.
Some processors (e.g. early SPARCs) have caches with both virtual and physical tags. The virtual tags are used for way selection, and the physical tags are used for determining hit or miss. This kind of cache enjoys the latency advantage of a virtually tagged cache, and the simple software interface of a physically tagged cache. It bears the added cost of duplicated tags, however. Also, during miss processing, the alternate ways of the cache line indexed have to be probed for virtual aliases and any matches evicted.
The extra area (and some latency) can be mitigated by keeping virtual hints with each cache entry instead of virtual tags. These hints are a subset or hash of the virtual tag, and are used for selecting the way of the cache from which to get data and a physical tag. Like a virtually tagged cache, there may be a virtual hint match but physical tag mismatch, in which case the cache entry with the matching hint must be evicted so that cache accesses after the cache fill at this address will have just one hint match. Since virtual hints have fewer bits than virtual tags distinguishing them from one another, a virtually hinted cache suffers more conflict misses than a virtually tagged cache.
Perhaps the ultimate reduction of virtual hints can be found in the Pentium 4 (Willamette and Northwood cores). In these processors the virtual hint is effectively 2 bits, and the cache is 4-way set associative. Effectively, the hardware maintains a simple permutation from virtual address to cache index, so that no content-addressable memory (CAM) is necessary to select the right one of the four ways fetched.
Page coloring
Large physically indexed caches (usually secondary caches) run into a problem: the operating system rather than the application controls which pages collide with one another in the cache. Differences in page allocation from one program run to the next lead to differences in the cache collision patterns, which can lead to very large differences in program performance. These differences can make it very difficult to get a consistent and repeatable timing for a benchmark run.
To understand the problem, consider a CPU with a 1 MB physically indexed direct-mapped level-2 cache and 4 kB virtual memory pages. Sequential physical pages map to sequential locations in the cache until the pattern wraps around after 256 pages. We can label each physical page with a color of 0–255 to denote where in the cache it can go. Locations within physical pages with different colors cannot conflict in the cache.
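Under the 1 MB direct-mapped, 4 kB page assumptions of this example, the color of a page is just a few bits of its page number, as the following sketch shows:

```python
CACHE_SIZE = 1 << 20                   # 1 MB, physically indexed, direct mapped
PAGE_SIZE = 4 << 10                    # 4 kB pages
NUM_COLORS = CACHE_SIZE // PAGE_SIZE   # 256 colors

def page_color(physical_address: int) -> int:
    """Pages with different colors can never conflict in this cache."""
    page_number = physical_address // PAGE_SIZE
    return page_number % NUM_COLORS    # color 0-255

# Two pages 1 MB apart share a color and therefore compete for the same cache lines.
print(page_color(0x0000_0000), page_color(0x0010_0000), page_color(0x0000_1000))
# -> 0 0 1
```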
A programmer attempting to make maximum use of the cache may arrange his program's access patterns so that only 1 MB of data need be cached at any given time, thus avoiding capacity misses. But he should also ensure that the access patterns do not have conflict misses. One way to think about this problem is to divide up the virtual pages the program uses and assign them virtual colors in the same way as physical colors were assigned to physical pages before. The programmer can then arrange the access patterns of his code so that no two pages with the same virtual color are in use at the same time. There is a wide literature on such optimizations (e.g. loop nest optimization), largely coming from the High Performance Computing (HPC) community.
The snag is that while all the pages in use at any given moment may have different virtual colors, some may have the same physical colors. In fact, if the operating system assigns physical pages to virtual pages randomly and uniformly, it is extremely likely that some pages will have the same physical color, and then locations from those pages will collide in the cache (this is the birthday paradox).
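The strength of this effect can be checked with the standard birthday-problem calculation, applied here to the 256 colors of the running example; the page counts chosen are arbitrary illustrations.

```python
def collision_probability(pages: int, colors: int = 256) -> float:
    """Probability that at least two of `pages` randomly placed pages share a physical color."""
    p_all_distinct = 1.0
    for k in range(pages):
        p_all_distinct *= (colors - k) / colors
    return 1.0 - p_all_distinct

for n in (20, 40, 80):
    print(n, round(collision_probability(n), 3))
# roughly 0.52 at 20 pages, 0.95 at 40 pages, nearly 1.0 at 80 pages
```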
The solution is to have the operating system attempt to assign different physical color pages to different virtual colors, a technique called page coloring. Although the actual mapping from virtual to physical color is irrelevant to system performance, odd mappings are difficult to keep track of and have little benefit, so most approaches to page coloring simply try to keep physical and virtual page colors the same.
If the operating system can guarantee that each physical page maps to only one virtual color, then there are no virtual aliases, and the processor can use virtually indexed caches with no need for extra virtual alias probes during miss handling. Alternatively, the O/S can flush a page from the cache whenever it changes from one virtual color to another. As mentioned above, this approach was used for some early SPARC and RS/6000 designs.
Cache hierarchy in a modern processor
Modern processors have multiple interacting caches on chip.
The operation of a particular cache can be completely specified by:
- the cache size
- the cache block size
- the number of blocks in a set
- the cache set replacement policy
- the cache write policy (write-through or write-back)
While all of the cache blocks in a particular cache are the same size and have the same associativity, typically "lower-level" caches (such as the L1 cache) have a smaller size, have smaller blocks, and have fewer blocks in a set, while "higher-level" caches (such as the L3 cache) have larger size, larger blocks, and more blocks in a set.
Specialized caches
Pipelined CPUs access memory from multiple points in the pipeline: instruction fetch, virtual-to-physical address translation, and data fetch (see classic RISC pipeline). The natural design is to use different physical caches for each of these points, so that no one physical resource has to be scheduled to service two points in the pipeline. Thus the pipeline naturally ends up with at least three separate caches (instruction, TLB, and data), each specialized to its particular role.
Pipelines with separate instruction and data caches, now predominant, are said to have a Harvard architecture. Originally, this phrase referred to machines with separate instruction and data memories, which proved not at all popular. Most modern CPUs have a single-memory von Neumann architecture.
Victim cache
A victim cache is a cache used to hold blocks evicted from a CPU cache upon replacement. The victim cache lies between the main cache and its refill path, and only holds blocks that were evicted from the main cache. The victim cache is usually fully associative, and is intended to reduce the number of conflict misses. Many commonly used programs do not require an associative mapping for all the accesses. In fact, only a small fraction of the memory accesses of the program require high associativity. The victim cache exploits this property by providing high associativity to only these accesses. It was introduced by Norman Jouppi from DEC in 1990.
Trace cache
One of the more extreme examples of cache specialization is the trace cache found in the Intel Pentium 4 microprocessors. A trace cache is a mechanism for increasing the instruction fetch bandwidth and decreasing power consumption (in the case of the Pentium 4) by storing traces of instructions that have already been fetched and decoded.
The earliest widely acknowledged academic publication of trace cache was by Eric Rotenberg, Steve Bennett, and Jim Smith in their 1996 paper "Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching."
An earlier publication is US Patent 5,381,533, "Dynamic flow instruction cache memory organized around trace segments independent of virtual address line", by Alex Peleg and Uri Weiser of Intel Corp., patent filed March 30, 1994, a continuation of an application filed in 1992, later abandoned.
A trace cache stores instructions either after they have been decoded, or as they are retired. Generally, instructions are added to trace caches in groups representing either individual basic blocks or dynamic instruction traces. A dynamic trace ("trace path") contains only instructions whose results are actually used, and eliminates instructions following taken branches (since they are not executed); a dynamic trace can be a concatenation of multiple basic blocks. This allows the instruction fetch unit of a processor to fetch several basic blocks, without having to worry about branches in the execution flow.
Trace lines are stored in the trace cache based on the program counter of the first instruction in the trace and a set of branch predictions. This allows for storing different trace paths that start on the same address, each representing different branch outcomes. In the instruction fetch stage of a pipeline, the current program counter along with a set of branch predictions is checked in the trace cache for a hit. If there is a hit, a trace line is supplied to fetch which does not have to go to a regular cache or to memory for these instructions. The trace cache continues to feed the fetch unit until the trace line ends or until there is a misprediction in the pipeline. If there is a miss, a new trace starts to be built.
Trace caches are also used in processors like the Intel Pentium 4 to store already decoded micro-operations, or translations of complex x86 instructions, so that the next time an instruction is needed, it does not have to be decoded again.
Multi-level caches
Another issue is the fundamental tradeoff between cache latency and hit rate. Larger caches have better hit rates but longer latency. To address this tradeoff, many computers use multiple levels of cache, with small fast caches backed up by larger slower caches.
Multi-level caches generally operate by checking the smallest level 1 (L1) cache first; if it hits, the processor proceeds at high speed. If the smaller cache misses, the next larger cache (L2) is checked, and so on, before external memory is checked.
As the latency difference between main memory and the fastest cache has become larger, some processors have begun to utilize as many as three levels of on-chip cache. For example, the Alpha 21164 (1995) had 1 to 64 MB off-chip L3 cache; the IBM POWER4 (2001) had off-chip L3 caches of 32 MB per processor, shared among several processors; the Itanium 2 (2003) had a 6 MB unified level 3 (L3) cache on-die; the Itanium 2 (2003) MX 2 Module incorporates two Itanium2 processors along with a shared 64 MB L4 cache on a Multi-chip module that was pin compatible with a Madison processor; Intel's Xeon MP product code-named "Tulsa" (2006) features 16 MB of on-die L3 cache shared between two processor cores; the AMD Phenom II (2008) has up to 6 MB on-die unified L3 cache; and the Intel Core i7 (2008) has an 8 MB on-die unified L3 cache that is inclusive, shared by all cores. The benefits of an L3 cache depend on the application's access patterns.
Finally, at the other end of the memory hierarchy, the CPU register file itself can be considered the smallest, fastest cache in the system, with the special characteristic that it is scheduled in software—typically by a compiler, as it allocates registers to hold values retrieved from main memory. (See especially loop nest optimization.) Register files sometimes also have hierarchy: The Cray-1 (circa 1976) had 8 address "A" and 8 scalar data "S" registers that were generally usable. There was also a set of 64 address "B" and 64 scalar data "T" registers that took longer to access, but were faster than main memory. The "B" and "T" registers were provided because the Cray-1 did not have a data cache. (The Cray-1 did, however, have an instruction cache.)
Multi-core chips
When considering a chip with multiple cores, there is a question of whether the caches should be shared or local to each core. Implementing shared cache undoubtedly introduces more wiring and complexity. But then, having one cache per chip, rather than core, greatly reduces the amount of space needed, and thus one can include a larger cache. Typically one finds that sharing L1 cache is undesirable since the latency increase is such that each core will run considerably slower than a single-core chip. But then, for the highest level (the last one called before accessing memory), having a global cache is desirable for several reasons. For example, an eight-core chip with three levels may include an L1 cache for each core, an L3 cache shared by all cores, with the L2 cache intermediate, e.g., one for each pair of cores.
Separate versus unified
In a separate cache structure, instructions and data are cached separately, meaning that a cache line is used to cache either instructions or data, but not both. In a unified one, this constraint is removed.
Exclusive versus inclusive
Multi-level caches introduce new design decisions. For instance, in some processors, all data in the L1 cache must also be somewhere in the L2 cache. These caches are called strictly inclusive. Other processors (like the AMD Athlon) have exclusive caches: data is guaranteed to be in at most one of the L1 and L2 caches, never in both. Still other processors (like the Intel Pentium II, III, and 4) do not require that data in the L1 cache also reside in the L2 cache, although it may often do so. There is no universally accepted name for this intermediate policy.
The advantage of exclusive caches is that they store more data. This advantage is larger when the exclusive L1 cache is comparable to the L2 cache, and diminishes if the L2 cache is many times larger than the L1 cache. When the L1 misses and the L2 hits on an access, the hitting cache line in the L2 is exchanged with a line in the L1. This exchange is quite a bit more work than just copying a line from L2 to L1, which is what an inclusive cache does.
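The following sketch contrasts the two fill behaviors on an L1 miss that hits in L2. The dictionaries stand in for real cache arrays and the victim-selection step is omitted; this illustrates the policies, not any particular processor.

```python
def fill_exclusive(l1: dict, l2: dict, addr, victim_addr):
    """L1 miss, L2 hit: exchange the hitting L2 line with an L1 victim line."""
    line = l2.pop(addr)                           # the line leaves L2...
    if victim_addr in l1:
        l2[victim_addr] = l1.pop(victim_addr)     # ...and the evicted L1 line moves down to L2
    l1[addr] = line                               # the data now lives in exactly one cache

def fill_inclusive(l1: dict, l2: dict, addr, victim_addr):
    """L1 miss, L2 hit: copy the line; it stays resident in L2 as well."""
    if victim_addr in l1:
        del l1[victim_addr]                       # the victim is simply dropped (it is still in L2)
    l1[addr] = l2[addr]                           # the same data is present in both levels

l1, l2 = {}, {0x40: "line A"}
fill_exclusive(l1, l2, 0x40, victim_addr=None)
print(l1, l2)   # line A is now only in L1
```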
One advantage of strictly inclusive caches is that when external devices or other processors in a multiprocessor system wish to remove a cache line from the processor, they need only have the processor check the L2 cache. In cache hierarchies which do not enforce inclusion, the L1 cache must be checked as well. As a drawback, there is a correlation between the associativities of L1 and L2 caches: if the L2 cache does not have at least as many ways as all L1 caches together, the effective associativity of the L1 caches is restricted. Another disadvantage of inclusive caches is that whenever there is an eviction in the L2 cache, the (possibly) corresponding lines in L1 also have to be evicted in order to maintain inclusiveness. This is quite a bit of work, and results in a higher L1 miss rate.
Another advantage of inclusive caches is that the larger cache can use larger cache lines, which reduces the size of the secondary cache tags. (Exclusive caches require both caches to have the same size cache lines, so that cache lines can be swapped on an L1 miss, L2 hit.) If the secondary cache is an order of magnitude larger than the primary, and the cache data is an order of magnitude larger than the cache tags, the tag area saved can be comparable to the incremental area needed to store the L1 cache data in the L2.
Example: the K8
The K8 has 4 specialized caches: an instruction cache, an instruction TLB, a data TLB, and a data cache. Each of these caches is specialized:
- The instruction cache keeps copies of 64-byte lines of memory, and fetches 16 bytes each cycle. Each byte in this cache is stored in ten bits rather than 8, with the extra bits marking the boundaries of instructions (this is an example of predecoding). The cache has only parity protection rather than ECC, because parity is smaller and any damaged data can be replaced by fresh data fetched from memory (which always has an up-to-date copy of instructions).
- The instruction TLB keeps copies of page table entries (PTEs). Each cycle's instruction fetch has its virtual address translated through this TLB into a physical address. Each entry is either 4 or 8 bytes in memory. Because the K8 has a variable page size, each of the TLBs is split into two sections, one to keep PTEs that map 4 kB pages, and one to keep PTEs that map 4 MB or 2 MB pages. The split allows the fully associative match circuitry in each section to be simpler. The operating system maps different sections of the virtual address space with different size PTEs.
- The data TLB has two copies which keep identical entries. The two copies allow two data accesses per cycle to translate virtual addresses to physical addresses. Like the instruction TLB, this TLB is split into two kinds of entries.
- The data cache keeps copies of 64-byte lines of memory. It is split into 8 banks (each storing 8 kB of data), and can fetch two 8-byte data each cycle so long as those data are in different banks. There are two copies of the tags, because each 64-byte line is spread among all 8 banks. Each tag copy handles one of the two accesses per cycle.
The K8 also has multiple-level caches. There are second-level instruction and data TLBs, which store only PTEs mapping 4 kB. Both instruction and data caches, and the various TLBs, can fill from the large unified L2 cache. This cache is exclusive to both the L1 instruction and data caches, which means that any 8-byte line can only be in one of the L1 instruction cache, the L1 data cache, or the L2 cache. It is, however, possible for a line in the data cache to have a PTE which is also in one of the TLBs—the operating system is responsible for keeping the TLBs coherent by flushing portions of them when the page tables in memory are updated.
The K8 also caches information that is never stored in memory—prediction information. These caches are not shown in the above diagram. As is usual for this class of CPU, the K8 has fairly complex branch prediction, with tables that help predict whether branches are taken and other tables which predict the targets of branches and jumps. Some of this information is associated with instructions, in both the level 1 instruction cache and the unified secondary cache.
The K8 uses an interesting trick to store prediction information with instructions in the secondary cache. Lines in the secondary cache are protected from accidental data corruption (e.g. by an alpha particle strike) by either ECC or parity, depending on whether those lines were evicted from the data or instruction primary caches. Since the parity code takes fewer bits than the ECC code, lines from the instruction cache have a few spare bits. These bits are used to cache branch prediction information associated with those instructions. The net result is that the branch predictor has a larger effective history table, and so has better accuracy.
More hierarchies
These predictors are caches in that they store information that is costly to compute. Some of the terminology used when discussing predictors is the same as that for caches (one speaks of a hit in a branch predictor), but predictors are not generally thought of as part of the cache hierarchy.
The K8 keeps the instruction and data caches coherent in hardware, which means that a store to a location holding an instruction that closely follows the store will change that instruction. Other processors, like those in the Alpha and MIPS family, have relied on software to keep the instruction cache coherent. Stores are not guaranteed to show up in the instruction stream until a program calls an operating system facility to ensure coherency.
Cache reads are the most common CPU operation that takes more than a single cycle. Program execution time tends to be very sensitive to the latency of a level-1 data cache hit. A great deal of design effort, and often power and silicon area are expended making the caches as fast as possible.
The simplest cache is a virtually indexed direct-mapped cache. The virtual address is calculated with an adder, the relevant portion of the address extracted and used to index an SRAM, which returns the loaded data. The data is byte aligned in a byte shifter, and from there is bypassed to the next operation. There is no need for any tag checking in the inner loop — in fact, the tags need not even be read. Later in the pipeline, but before the load instruction is retired, the tag for the loaded data must be read, and checked against the virtual address to make sure there was a cache hit. On a miss, the cache is updated with the requested cache line and the pipeline is restarted.
An associative cache is more complicated, because some form of tag must be read to determine which entry of the cache to select. An N-way set-associative level-1 cache usually reads all N possible tags and N data in parallel, and then chooses the data associated with the matching tag. Level-2 caches sometimes save power by reading the tags first, so that only one data element is read from the data SRAM.
The diagram to the right is intended to clarify the manner in which the various fields of the address are used. Address bit 31 is most significant, bit 0 is least significant. The diagram shows the SRAMs, indexing, and multiplexing for a 4 kB, 2-way set-associative, virtually indexed and virtually tagged cache with 64 B lines, a 32b read width and 32b virtual address.
Because the cache is 4 kB and has 64 B lines, there are just 64 lines in the cache, and we read two at a time from a Tag SRAM which has 32 rows, each with a pair of 21 bit tags. Although any function of virtual address bits 31 through 6 could be used to index the tag and data SRAMs, it is simplest to use the least significant bits.
Similarly, because the cache is 4 kB and has a 4 B read path, and reads two ways for each access, the Data SRAM is 512 rows by 8 bytes wide.
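The address split described in the two paragraphs above (6 offset bits for 64 B lines, 5 index bits for 32 sets, and the remaining 21 bits of the 32-bit virtual address as the tag) can be written out explicitly. This sketch only extracts the fields; it does not model the SRAM timing.

```python
LINE_BYTES = 64     # 6 offset bits (bits 5..0)
NUM_SETS = 32       # 5 index bits (bits 10..6)
ADDR_BITS = 32      # remaining 21 bits (bits 31..11) form the tag

def split_address(vaddr: int) -> tuple[int, int, int]:
    """Split a 32-bit virtual address into (tag, set index, byte offset)."""
    offset = vaddr & (LINE_BYTES - 1)
    index = (vaddr >> 6) & (NUM_SETS - 1)
    tag = vaddr >> 11
    return tag, index, offset

tag, index, offset = split_address(0xDEADBEEF)
print(f"tag={tag:06x} index={index} offset={offset}")
```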
A more modern cache might be 16 kB, 4-way set-associative, virtually indexed, virtually hinted, and physically tagged, with 32 B lines, 32b read width and 36b physical addresses. The read path recurrence for such a cache looks very similar to the path above. Instead of tags, vhints are read, and matched against a subset of the virtual address. Later on in the pipeline, the virtual address is translated into a physical address by the TLB, and the physical tag is read (just one, as the vhint supplies which way of the cache to read). Finally the physical address is compared to the physical tag to determine if a hit has occurred.
Some SPARC designs have improved the speed of their L1 caches by a few gate delays by collapsing the virtual address adder into the SRAM decoders. See Sum addressed decoder.
The early history of cache technology is closely tied to the invention and use of virtual memory. Because of the scarcity and cost of semiconductor memories, early mainframe computers in the 1960s used a complex hierarchy of physical memory, mapped onto a flat virtual memory space used by programs. The memory technologies would span semiconductor, magnetic core, drum and disc. Virtual memory seen and used by programs would be flat, and caching would be used to fetch data and instructions into the fastest memory ahead of processor access. Extensive studies were done to optimize the cache sizes. Optimal values were found to depend greatly on the programming language used, with Algol needing the smallest and Fortran and Cobol needing the largest cache sizes.
In the early days of microcomputer technology, memory access was only slightly slower than register access. But since the 1980s the performance gap between processor and memory has been growing. Microprocessors have advanced much faster than memory, especially in terms of their operating frequency, so memory became a performance bottleneck. While it was technically possible to have all the main memory as fast as the CPU, a more economically viable path has been taken: use plenty of low-speed memory, but also introduce a small high-speed cache memory to alleviate the performance gap. This provided an order of magnitude more capacity—for the same price—with only a slightly reduced combined performance.
First TLB implementations
The first documented uses of a TLB were on the GE 645 and the IBM System/360 Model 67, both of which used a small associative memory to cache recently used page- and segment-table entries.
First data cache
The first documented use of a data cache was on the IBM System/360 Model 85, announced in 1968.
In x86 microprocessors
As the x86 microprocessors reached clock rates of 20 MHz and above in the 386, small amounts of fast cache memory began to be featured in systems to improve performance. This was because the DRAM used for main memory had significant latency, up to 120 ns, as well as refresh cycles. The cache was constructed from more expensive, but significantly faster, SRAM, which at the time had latencies around 10 ns. The early caches were external to the processor and typically located on the motherboard in the form of eight or nine DIP devices placed in sockets to enable the cache as an optional extra or upgrade feature.
Some versions of the Intel 386 processor could support 16 to 64 kB of external cache.
With the 486 processor, an 8 kB cache was integrated directly into the CPU die. This cache was termed Level 1 or L1 cache to differentiate it from the slower on-motherboard, or Level 2 (L2) cache. These on-motherboard caches were much larger, with the most common size being 256 kB. The popularity of on-motherboard cache continued on through the Pentium MMX era but was made obsolete by the introduction of SDRAM and the growing disparity between bus clock rates and CPU clock rates, which caused on-motherboard cache to be only slightly faster than main memory.
The next development in cache implementation in the x86 microprocessors began with the Pentium Pro, which brought the secondary cache onto the same package as the microprocessor, clocked at the same frequency as the microprocessor.
On-motherboard caches enjoyed prolonged popularity thanks to the AMD K6-2 and AMD K6-III processors that still used the venerable Socket 7, which was previously used by Intel with on-motherboard caches. The K6-III included 256 kB of on-die L2 cache and took advantage of the on-board cache as a third level cache, named L3 (motherboards with up to 2 MB of on-board cache were produced). After Socket 7 became obsolete, on-motherboard cache disappeared from x86 systems.
The three-level cache was used again with the introduction of multiple processor cores, where the L3 cache was added to the CPU die. It became common for each cache level to be larger than the one before it, and today it is not uncommon to find Level 3 cache sizes of eight megabytes. This trend appears set to continue for the foreseeable future.
Current research
There are several tools available to computer architects to help explore tradeoffs between cache cycle time, energy, and area. These tools include the open-source CACTI cache simulator and the open-source SimpleScalar instruction set simulator.
Multi-ported cache
A multi-ported cache is a cache which can serve more than one request at a time. When accessing a traditional cache we normally use a single memory address, whereas in a multi-ported cache we may request N addresses at a time, where N is the number of ports connecting the processor and the cache. The benefit of this is that a pipelined processor may access memory from different phases in its pipeline. Another benefit is that it supports superscalar execution, allowing several memory accesses to be issued per cycle through the different cache levels.
See also
- Cache coherency
- Cache algorithms
- Dinero (Cache simulator by University of Wisconsin System)
- Instruction unit
- Memoization, briefly defined in List of computer term etymologies
- No-write allocation
- Scratchpad RAM
- Write buffer
- John L. Hennessy, David A. Patterson. "Computer Architecture: A Quantitative Approach". 2011. ISBN 0-12-383872-X, ISBN 978-0-12-383872-8. page B-9.
- David A. Patterson, John L. Hennessy. "Computer organization and design: the hardware/software interface". 2009. ISBN 0-12-374493-8, ISBN 978-0-12-374493-7 "Chapter 5: Large and Fast: Exploiting the Memory Hierarchy". p. 484.
- Gene Cooperman. "Cache Basics". 2003.
- Ben Dugan. "Concerning Caches". 2002.
- Harvey G. Cragon. "Memory systems and pipelined processors". 1996. ISBN 0-86720-474-5, ISBN 978-0-86720-474-2. "Chapter 4.1: Cache Addressing, Virtual or Real" p. 209
- André Seznec. "A Case for Two-Way Skewed-Associative Caches". doi:10.1145/173682.165152.
- "Advanced Caching Techniques" by C. Kozyrakis
- Micro-Architecture "Skewed-associative caches have ... major advantages over conventional set-associative caches."
- "Cache performance of SPEC CPU2000". Cs.wisc.edu. Retrieved 2010-05-02.
- Sumner, F. H.; Haley, G.; Chenh, E. C. Y. (1962), "The Central Control Unit of the 'Atlas' Computer", Information Processing 1962, IFIP Congress Proceedings, Proceedings of IFIP Congress 62, Spartan.
- Kilburn, T.; Payne, R. B.; Howarth, D. J. (December 1961), "The Atlas Supervisor", Computers - Key to Total Systems Control, Conferences Proceedings, 20 Proceedings of the Eastern Joint Computer Conference Washington, D.C., Macmillan, pp. 279–294.
- "Understanding Caching". Linux Journal. Retrieved 2010-05-02.
- N.P.Jouppi. "Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers." - 17th Annual International Symposium on Computer Architecture, 1990. Proceedings., doi:10.1109/ISCA.1990.134547
- "Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching.". 1996. doi:10.1109/MICRO.1996.566447.
- "AMD K8". Sandpile.org. Retrieved 2007-06-02.
- "The Processor-Memory performance gap". acm.org. Retrieved 2007-11-08.
- GE (January 1968), GE-645 System Manual.
- IBM (February, 1972), IBM System/360 Model 67 Functional Characteristics, Third Edition, GA27-2719-2.
- IBM (June, 1968), IBM System/360 Model 85 Functional Characteristics, SECOND EDITION, A22-6916-1.
- "Chip Design Thwarts Sneak Attack on Data" by Sally Adee 2009 discusses "A novel cache architecture with enhanced performance and security" by Zhenghong Wang and Ruby B. Lee: (abstract) "Caches ideally should have low miss rates and short access times, and should be power efficient at the same time. Such design goals are often contradictory in practice."
- "CACTI". Hpl.hp.com. Retrieved 2010-05-02.
- Memory part 2: CPU caches An article on lwn.net by Ulrich Drepper describing CPU caches in detail.
- Evaluating Associativity in CPU Caches — Hill and Smith — 1989 — Introduces capacity, conflict, and compulsory classification.
- Cache Performance for SPEC CPU2000 Benchmarks — Hill and Cantin — 2003 — This reference paper has been updated several times. It has thorough and lucidly presented simulation results for a reasonably wide set of benchmarks and cache organizations.
- Memory Hierarchy in Cache-Based Systems, by Ruud van der Pas, 2002, Sun Microsystems, is a nice introductory article to CPU memory caching.
- A Cache Primer by Paul Genua, P.E., 2004, Freescale Semiconductor, another introductory article.
- An 8-way set-associative cache written in VHDL
- Understanding CPU caching and performance An article on Ars Technica by Jon Stokes. | http://en.wikipedia.org/wiki/CPU_cache | 13 |
72 | - Not to be confused with the spheromak, another topic in fusion research.
A spherical tokamak is a type of fusion power device based on the tokamak principle. It is notable for its very narrow profile, or "aspect ratio". A traditional tokamak has a toroidal confinement area that gives it an overall shape similar to a donut, complete with a large hole in the middle. The spherical tokamak reduces the size of the hole almost to zero, resulting in a plasma shape that is almost spherical, often compared with a cored apple. The spherical tokamak is sometimes referred to as a spherical torus and often shortened to ST.
The spherical tokamak is an offshoot of the conventional tokamak design. Proponents claim that it has a number of practical advantages over these devices, some of them dramatic. For this reason the ST has seen considerable interest since it was introduced in the late 1980s. However, development remains effectively one generation behind mainline efforts like JET. Major experiments in the field include the pioneering START and MAST at Culham in the UK, the US's NSTX, and numerous others.
Further theoretical work has cast some doubt on the use of spherical tokamaks as a route to lower cost power producing reactors. Further research is needed to better understand the "scaling laws" associated with this design. Even in the event that spherical tokamaks do not lead to lower cost approaches to generation, they are still lower cost in general; this makes them attractive devices for general plasma physics uses, or as concentrated high-energy neutron sources.
Aspect ratio
Fusion reactor efficiency is based on the amount of power released from fusion reactions compared with the power needed to keep the plasma hot. This can be calculated from three key measures: the temperature of the plasma, its density, and the length of time the reaction is maintained. The product of these three measures is the "fusion triple product", and in order to be economic it must reach the Lawson criterion, ≥ 3×10²¹ keV·s/m³.
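As a rough worked example (the temperature and density below are illustrative values, not figures from the text), the triple product can be rearranged to give the confinement time needed at a given temperature and density:

```python
LAWSON_TRIPLE = 3e21   # keV * s / m^3, the approximate threshold quoted above

def required_confinement_time(temperature_keV: float, density_m3: float) -> float:
    """Confinement time needed so that n * T * tau reaches the Lawson criterion."""
    return LAWSON_TRIPLE / (temperature_keV * density_m3)

# Illustrative tokamak-like conditions: T ~ 15 keV, n ~ 1e20 particles per cubic meter.
print(required_confinement_time(15.0, 1e20), "seconds")   # -> 2.0
```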
Tokamaks are the leading approach within the larger group of magnetic fusion energy (MFE) designs, all of which attempt to confine a plasma using powerful magnetic fields. In the MFE approach, it is the time axis that is considered most important for ongoing development. Tokamaks confine their fuel at low pressure (around 1/millionth of atmospheric) but high temperatures (150 million Celsius), and attempt to keep those conditions stable for increasing times on the order of seconds to minutes.
A key measure of MFE reactor economics is "beta", β, the ratio of plasma pressure to the magnetic pressure. Improving beta means that you need to use, in relative terms, less energy to generate the magnetic fields for any given plasma pressure (or density). The price of magnets scales roughly with β½, so reactors operating at higher betas are less expensive for any given level of confinement. Tokamaks operate at relatively low betas, a few %, and generally require superconducting magnets in order to have enough field strength to reach useful densities.
The limiting factor in reducing beta is the size of the magnets. Tokamaks use a series of ring-shaped magnets around the confinement area, and their physical dimensions mean that the hole in the middle of the torus can be reduced only so much before the magnet windings are touching. This limits the aspect ratio, A, of the reactor to about 2.5; the diameter of the reactor as a whole could be about 2.5 times the cross-sectional diameter of the confinement area. Some experimental designs were slightly under this limit, while many reactors had much higher A.
Reducing A
During the 1980s, researchers at Oak Ridge National Laboratory (ORNL), led by Ben Carreras and Tim Hender, were studying the operations of tokamaks as A was reduced. They noticed, based on magnetohydrodynamic considerations, that tokamaks were inherently more stable at low aspect ratios. In particular, the classic "kink instability" was strongly suppressed. Other groups expanded on this body of theory, and found that the same was true for the high-order ballooning instability as well. This suggested that a low-A machine would not only be less expensive to build, but have better performance as well.
One way to reduce the size of the magnets is to re-arrange them around the confinement area. This was the idea behind the "compact tokamak" designs, typified by the Alcator C-Mod, Riggatron and IGNITOR. The latter two of these designs place the magnets inside the confinement area, so the toroidal vacuum vessel can be replaced with a cylinder. The decreased distance between the magnets and plasma leads to much higher betas, so conventional (non-superconducting) magnets could be used. The downside to this approach, one that was widely criticized, is that it places the magnets directly in the high-energy neutron flux of the fusion reactions. In operation the magnets would be rapidly eroded, requiring the vacuum vessel to be opened and the entire magnet assembly replaced after a month or so of operation.
Around the same time, several advances in plasma physics were making their way through the fusion community. Of particular importance were the concepts of elongation and triangularity, referring to the cross-sectional shape of the plasma. Early tokamaks had all used circular cross-sections simply because that was the easiest to model and build, but over time it became clear that C or (more commonly) D-shaped plasma cross-sections led to higher performance. This produces plasmas with high "shear", which distributed and broke up turbulent eddies in the plasma. These changes led to the "advanced tokamak" designs, which include ITER.
Spherical tokamaks
In 1984, Martin Peng of ORNL proposed an alternate arrangement of the magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak. Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center, and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2. This means that ST's can reach the same operational triple product numbers as conventional designs using one tenth the magnetic field.
The design, naturally, also included the advances in plasma shaping that were being studied concurrently. Like all modern designs, the ST uses a D-shaped plasma cross section. If you consider a D on the right side and a reversed D on the left, as the two approach each other (as A is reduced) eventually the vertical surfaces touch and the resulting shape is a circle. In 3D, the outer surface is roughly spherical. They named this layout the "spherical tokamak", or ST. These studies suggested that the ST layout would combine the qualities of the advanced tokamak and the compact tokamak: it would strongly suppress several forms of turbulence, reach high β, have high self-magnetism, and be less costly to build.
The ST concept appeared to represent an enormous advance in tokamak design. However, it was being proposed during a period when US fusion research budgets were being dramatically scaled back. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called "Glidcop". However, they were unable to secure funding to build a demonstration machine, "STX".
From spheromak to ST
Failing to build an ST at ORNL, Peng began a worldwide effort to interest other teams in the ST concept and get a test machine built. One way to do this quickly would be to convert a spheromak machine to the ST layout.
Spheromaks are essentially "smoke rings" of plasma that are internally self-stable. They can, however, drift about within their confinement area. The typical solution to this problem was to wrap the area in a sheet of copper, or more rarely, place a copper conductor down the center. When the spheromak approaches the conductor, a magnetic field is generated that pushes it away again. A number of experimental spheromak machines were built in the 1970s and early 80s, but demonstrated performance that simply was not interesting enough to suggest further development.
Machines with the central conductor had a strong mechanical resemblance to the ST design, and could be converted with relative ease. The first such conversion was made to the Heidelberg Spheromak Experiment, or HSE. Built at Heidelberg University in the early 1980s, HSE was quickly converted to a ST in 1987 by adding new magnets to the outside of the confinement area and attaching them to its central conductor. Although the new configuration only operated "cold", far below fusion temperatures, the results were promising and demonstrated all of the basic features of the ST.
Several other groups with spheromak machines made similar conversions, notably the rotamak at the Australian Nuclear Science and Technology Organisation and the SPHEX machine. In general they all found an increase in performance of a factor of two or more. This was an enormous advance, and the need for a purpose-built machine became pressing.
START and newer systems
Peng's advocacy also caught the interest of Derek Robinson, of the United Kingdom Atomic Energy Authority (UKAEA) fusion center at Culham. What is today known as the Culham Centre for Fusion Energy was set up in the 1960s to gather together all of the UK's fusion research, formerly spread across several sites, and Robinson had recently been promoted to running several projects at the site.
Robinson was able to gather together a team and secure funding on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak, or START. Several parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Before it started operation there was considerable uncertainty about its performance, and predictions that the project would be shut down if confinement proved to be similar to spheromaks.
Construction of START began in 1990, it was assembled rapidly and started operation in January 1991. Its earliest operations quickly put any theoretical concerns to rest. Using ohmic heating alone, START demonstrated betas as high as 12%, almost matching the record of 12.6% on the DIII-D machine. The results were so good that an additional 10 million pounds of funding was provided over time, leading to a major re-build in 1995. When neutral beam heating was turned on, beta jumped to 40%, beating any conventional design by 3 times.
Additionally, START demonstrated excellent plasma stability. A practical rule of thumb in conventional designs is that as the operational beta approaches a certain value normalized for the machine size, ballooning instability destabilizes the plasma. This so-called "Troyon limit" is normally 4, and generally limited to about 3.5 in real world machines. START improved this dramatically to 6. The limit depends on size of the machine, and indicates that machines will have to be built of at least a certain size if they wish to reach some performance goal. With START's much higher scaling, the same limits would be reached with a smaller machine.
Rush to build STs
START proved Peng and Strickler's predictions; the ST had performance an order of magnitude better than conventional designs, and cost much less to build as well. In terms of overall economics, the ST was an enormous step forward.
Moreover, the ST was a new approach, and a low-cost one. It was one of the few areas of mainline fusion research where real contributions could be made on small budgets. This sparked off a series of ST developments around the world. In particular, the National Spherical Torus Experiment (NSTX) and Pegasus experiments in the US, Globus-M in Russia, and the UK's follow-on to START, MAST. START itself found new life as part of the Proto-Sphera project in Italy, where experimenters are attempting to eliminate the central column by passing the current through a secondary plasma.
Tokamak reactors consist of a toroidal vacuum tube surrounded by a series of magnets. One set of magnets is logically wired in a series of rings around the outside of the tube, but is physically connected through a common conductor in the center. The central column is also normally used to house the solenoid that forms the inductive loop for the ohmic heating system (and pinch current).
The canonical example of the design can be seen in the small tabletop ST device made at Flinders University, which uses a central column made of copper wire wound into a solenoid, return bars for the toroidal field made of vertical copper wires, and a metal ring connecting the two and providing mechanical support to the structure.
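The toroidal field produced by such an arrangement follows directly from Ampère's law: at major radius R it depends only on the total current threading the central column, and falls off as 1/R. The short Python sketch below illustrates this fall-off for a tabletop-scale device; the current and radii are illustrative assumptions, not measurements of the Flinders machine.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def toroidal_field(total_column_current, major_radius):
    """Toroidal field B_t at major radius R from Ampere's law,
    B_t = mu_0 * I_column / (2 * pi * R), where I_column is the total
    current carried by the central conductor (all turns summed)."""
    return MU_0 * total_column_current / (2 * math.pi * major_radius)

# Illustrative tabletop-scale numbers (assumed, not from the Flinders device):
I_column = 50e3                  # 50 kA equivalent total current in the central column
for R in (0.05, 0.10, 0.20):     # major radii in metres
    print(f"R = {R:4.2f} m  ->  B_t = {toroidal_field(I_column, R):.3f} T")
```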
Stability within the ST
Advances in plasma physics in the 1970s and 80s led to a much stronger understanding of stability issues, and this developed into a series of "scaling laws" that can be used to quickly determine rough operational numbers across a wide variety of systems. In particular, Troyon's work on the critical beta of a reactor design is considered one of the great advances in modern plasma physics. Troyon's work provides a beta limit where operational reactors will start to see significant instabilities, and demonstrates how this limit scales with size, layout, magnetic field and current in the plasma.
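Troyon's scaling is usually quoted as a limit on the normalised beta: the maximum toroidal beta (in percent) is roughly the Troyon factor times I/(aB), with the plasma current I in megaamperes, the minor radius a in metres and the toroidal field B in teslas. The sketch below evaluates this rule of thumb; the machine parameters are assumptions chosen only to show how the limit is used, while the Troyon factors of 3.5 and 6 are the values quoted earlier for conventional machines and for START.

```python
def troyon_beta_limit(beta_n, plasma_current_MA, minor_radius_m, field_T):
    """Maximum toroidal beta (in percent) from the Troyon scaling
    beta_max[%] = beta_N * I[MA] / (a[m] * B[T])."""
    return beta_n * plasma_current_MA / (minor_radius_m * field_T)

# Illustrative parameters only (assumed, not taken from any particular machine):
print(troyon_beta_limit(3.5, plasma_current_MA=3.0, minor_radius_m=1.0, field_T=3.0))
# -> 3.5 % for a conventional-tokamak-like case with a Troyon factor of ~3.5

print(troyon_beta_limit(6.0, plasma_current_MA=1.0, minor_radius_m=0.5, field_T=0.5))
# -> 24 % for an ST-like case using the higher factor of ~6 demonstrated on START
```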
However, Troyon's work did not consider extreme aspect ratios, work that was later carried out by a group at the Princeton Plasma Physics Laboratory. This starts with a development of a useful beta for a highly asymmetric volume:

$\beta = \dfrac{\langle p \rangle}{\langle B \rangle^{2} / 2\mu_0}$

Where $\langle B \rangle$ is the volume averaged magnetic field (as opposed to Troyon's use of the field in the vacuum outside the plasma, $B_0$). Following Freidberg, this beta is then fed into a modified version of the safety factor:

$q^{*} = \dfrac{2\pi a^{2} B_0}{\mu_0 R_0 I}\,\dfrac{1 + \kappa^{2}}{2}$
Where $B_0$ is the vacuum magnetic field, $a$ is the minor radius, $R_0$ the major radius, $I$ the plasma current, and $\kappa$ the elongation. In this definition it should be clear that decreasing the aspect ratio, $A = R_0/a$, leads to higher average safety factors. These definitions allowed the Princeton group to develop a more flexible version of Troyon's critical beta:

$\beta_{\mathrm{crit}} = 5\,\beta_N\,\dfrac{1 + \kappa^{2}}{2}\,\dfrac{\epsilon}{q^{*}}$
Where $\epsilon = a/R_0$ is the inverse aspect ratio and $\beta_N$ is a constant scaling factor that is about 0.03 for any $q^{*}$ greater than 2. Note that the critical beta scales with aspect ratio, although not directly, because $q^{*}$ also includes aspect ratio factors. Numerically, it can be shown that $\beta_{\mathrm{crit}}$ is maximized for:
Using this in the critical beta formula above:
For a spherical tokamak with an elongation of 2 and an aspect ratio of 1.25:
Now compare this to a traditional tokamak with the same elongation and a major radius of 5 meters and minor radius of 2 meters:
The linearity of $\beta_{\mathrm{crit}}$ with aspect ratio is evident.
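To make the comparison concrete, the short sketch below evaluates the critical-beta expression as reconstructed above for the two geometries just described. The safety factor is fixed at q* = 2 (the value above which the 0.03 scaling factor applies); that choice, and the reconstructed formula itself, are assumptions of this illustration rather than the original Princeton figures.

```python
def critical_beta(epsilon, kappa, q_star, beta_n=0.03):
    """beta_crit = 5 * beta_N * (1 + kappa^2)/2 * (epsilon / q_star),
    the reconstructed low-aspect-ratio form of the Troyon limit used above."""
    return 5 * beta_n * (1 + kappa**2) / 2 * epsilon / q_star

kappa, q_star = 2.0, 2.0   # elongation of 2; q* = 2 assumed for both machines

beta_st = critical_beta(epsilon=1 / 1.25, kappa=kappa, q_star=q_star)  # aspect ratio 1.25
beta_conv = critical_beta(epsilon=2 / 5, kappa=kappa, q_star=q_star)   # R0 = 5 m, a = 2 m

print(f"spherical tokamak : beta_crit ~ {100 * beta_st:.0f} %")
print(f"conventional      : beta_crit ~ {100 * beta_conv:.1f} %")
# With everything else held equal, beta_crit scales with the inverse aspect ratio.
```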
Power scaling
Beta is an important measure of performance, but in the case of a reactor designed to produce electricity, there are other practical issues that have to be considered. Among these is the power density, which offers an estimate of the size of the machine needed for a given power output. This is a function of the plasma pressure, which is in turn a function of beta. At first glance it might seem that the ST's higher betas would naturally lead to higher allowable pressures, and thus higher power density. However, this is only true if the magnetic field remains the same – beta is the ratio of plasma pressure to magnetic pressure.
If one imagines a toroidal confinement area wrapped with ring-shaped magnets, it is clear that the magnetic field is greater on the inside radius than the outside - this is the basic stability problem that the tokamak's electrical current addresses. However, the difference in that field is a function of aspect ratio; an infinitely large toroid would approximate a straight solenoid, while an ST maximizes the difference in field strength. Moreover, as there are certain aspects of reactor design that are fixed in size, the aspect ratio might be forced into certain configurations. For instance, production reactors would use a thick "blanket" containing lithium around the reactor core in order to capture the high-energy neutrons being released, both to protect the rest of the reactor mass from these neutrons as well as to produce tritium for fuel. The size of the blanket is a function of the neutrons' energy, which is 14 MeV in the D-T reaction regardless of the reactor design. Thus the blanket would be the same for an ST or traditional design, about a meter across.
In this case further consideration of the overall magnetic field is needed when comparing the betas. Working inward through the reactor volume toward the inner surface of the plasma, we would encounter the blanket, the "first wall" and several empty spaces. As we move away from the magnet, the field reduces in a roughly linear fashion. If we consider these reactor components as a group, we can calculate the magnetic field that remains on the far side of the blanket, at the inner face of the plasma:
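A rough way to see how much field survives this standoff is to assume the vacuum toroidal field falls off as 1/R outside the coil, so the field at radius R is the coil field scaled by the ratio of the coil radius to R. The sketch below applies this to the conventional-design numbers used later in the comparison (a 15 T coil, a 1.2 m blanket, a 5 m major radius and a 2 m minor radius); the 1/R scaling and the neglect of the first wall and gaps are simplifying assumptions for illustration only.

```python
def field_at_radius(B_max, R_coil, R):
    """Toroidal field at major radius R, assuming the vacuum 1/R fall-off
    B(R) = B_max * R_coil / R outside a coil whose inboard leg sits at R_coil."""
    return B_max * R_coil / R

# Conventional-design numbers used later in the comparison; the first wall and
# gaps between components are ignored here for simplicity.
R0, a, blanket, B_max = 5.0, 2.0, 1.2, 15.0
R_coil = R0 - a - blanket          # inboard coil leg at 1.8 m
R_inner_plasma = R0 - a            # inner face of the plasma at 3.0 m

print(field_at_radius(B_max, R_coil, R_inner_plasma))  # ~9.0 T at the inner plasma face
print(field_at_radius(B_max, R_coil, R0))              # ~5.4 T at the plasma centre
```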
Now we consider the average plasma pressure that can be generated with this magnetic field. Following Freidberg:
In an ST, where we are attempting to maximize the magnetic field in the plasma as a general principle, one can eliminate the blanket on the inside face and leave the central column open to the neutrons. In this case, the inboard blanket thickness is zero. Considering a central column made of copper, we can fix the maximum field generated in the coil to about 7.5 T. Using the ideal numbers from the section above:
Now consider the conventional design as above, using superconducting magnets with a maximum field of 15 T, and a blanket of 1.2 meters thickness. First we calculate the inverse aspect ratio $\epsilon$ to be 1/(5/2) = 0.4 and the relative blanket thickness $b/R_0$ to be 1.2/5 = 0.24, then:
So in spite of the higher beta in the ST, the overall power density is lower, largely due to the use of superconducting magnets in the traditional design. This issue has led to considerable work to see if these scaling laws hold for the ST, and efforts to increase the allowable field strength through a variety of methods. Work on START suggests that the scaling factors are much higher in STs, but this work needs to be replicated at higher powers to better understand the scaling.
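The trade-off can be illustrated with rough numbers. The sketch below estimates the achievable pressure as beta times the magnetic pressure at the plasma centre, with the field taken from the same 1/R fall-off as the previous sketch, and uses the common approximation that fusion power density scales with the square of the pressure. The betas are the figures quoted earlier for DIII-D-class machines and for START, but the geometry, the placement of the 7.5 T copper-column limit right at the inner plasma edge, and the choice of evaluating the field at the plasma centre are all illustrative assumptions rather than values from the original analysis.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def pressure(beta, B_plasma):
    """Rough achievable plasma pressure <p> ~ beta * B^2 / (2*mu_0), in pascals."""
    return beta * B_plasma**2 / (2 * MU_0)

# Conventional design: 15 T coil behind a 1.2 m blanket, R0 = 5 m, a = 2 m.
# Field at the plasma centre via the 1/R fall-off: B = B_max * (R0 - a - b) / R0.
B_conv = 15.0 * (5.0 - 2.0 - 1.2) / 5.0          # ~5.4 T

# ST: 7.5 T copper column assumed to sit right at the inner plasma edge,
# aspect ratio 1.25, so the field at the plasma centre is B_max * (1 - epsilon).
B_st = 7.5 * (1 - 1 / 1.25)                      # ~1.5 T

p_conv = pressure(beta=0.12, B_plasma=B_conv)    # betas from the figures quoted earlier
p_st = pressure(beta=0.40, B_plasma=B_st)

print(f"conventional: {p_conv / 1e5:.1f} bar   ST: {p_st / 1e5:.1f} bar")
# Fusion power density scales roughly as pressure squared:
print(f"relative power density, ST / conventional: {(p_st / p_conv) ** 2:.2f}")
```

With these assumed inputs the conventional design reaches several times the plasma pressure of the ST, so its power density comes out more than an order of magnitude higher, in line with the conclusion above.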
STs have two major advantages over conventional designs.
The first is practical. Using the ST layout places the toroidal magnets much closer to the plasma, on average. This greatly reduces the amount of energy needed to power the magnets in order to reach any particular level of magnetic field within the plasma. Smaller magnets cost less, reducing the cost of the reactor. The gains are so great that superconducting magnets may not be required, leading to even greater cost reductions. START placed the secondary magnets inside the vacuum chamber, but in modern machines these have been moved outside and can be superconducting.
The other advantages have to do with the stability of the plasma. Since the earliest days of fusion research, the problem in making a useful system has been a number of plasma instabilities that only appeared as the operating conditions moved ever closer to useful ones for fusion power. In 1954 Edward Teller hosted a meeting exploring some of these issues, and noted that he felt plasmas would be inherently more stable if they were following convex lines of magnetic force, rather than concave. It was not clear at the time if this manifested itself in the real world, but over time the wisdom of these words became apparent.
In the tokamak, stellarator and most pinch devices, the plasma is forced to follow helical magnetic lines. This alternately moves the plasma from the outside of the confinement area to the inside. While on the outside, the particles are being pushed inward, following a concave line. As they move to the inside they are being pushed outward, following a convex line. Thus, following Teller's reasoning, the plasma is inherently more stable on the inside section of the reactor. In practice the actual limits are suggested by the "safety factor", q, which varies over the volume of the plasma.
In a traditional circular cross-section tokamak, the plasma spends about the same time on the inside and the outside of the torus; slightly less on the inside because of the shorter radius. In the advanced tokamak with a D-shaped plasma, the inside surface of the plasma is significantly enlarged and the particles spend more time there. However, in a normal high-A design, q varies only slightly as the particle moves about, as the relative distance from inside to outside is small compared to the radius of the machine as a whole (the definition of aspect ratio). In an ST machine, the variance from "inside" to "outside" is much larger in relative terms, and the particles spend much more of their time on the "inside". This leads to greatly improved stability.
It is possible to build a traditional tokamak that operates at higher betas, through the use of more powerful magnets. To do this, the current in the plasma must be increased in order to generate the toroidal magnetic field of the right magnitude. This drives the plasma ever closer to the Troyon limits where instabilities set in. The ST design, through its mechanical arrangement, has much better q and thus allows for much more magnetic power before the instabilities appear. Conventional designs hit the Troyon limit around 3.5, whereas START demonstrated operation at 6.
The ST has three distinct disadvantages compared to "conventional" advanced tokamaks with higher aspect ratios.
The first issue is that the overall pressure of the plasma in an ST is lower than in conventional designs, in spite of the higher beta. This is due to the limits of the magnetic field on the inside of the plasma. This limit is theoretically the same in the ST and conventional designs, but as the ST has a much lower aspect ratio, the effective field changes more dramatically over the plasma volume.
The second issue is both an advantage and a disadvantage. The ST is so small, at least in the center, that there is little or no room for superconducting magnets. This is not a deal-breaker for the design, as the fields from conventional copper-wound magnets are enough for the ST. However, this means that power dissipation in the central column will be considerable. Engineering studies suggest that the maximum field possible will be about 7.5 T, much lower than is possible with a conventional layout. This places a further limit on the allowable plasma pressures. However, the lack of superconducting magnets greatly lowers the price of the system, potentially offsetting this issue economically.
The lack of shielding also means the magnet is directly exposed to the interior of the reactor. It is subject to the full heating flux of the plasma, and to the neutrons generated by the fusion reactions. In practice, this means that the column would have to be replaced fairly often, likely on the order of a year, greatly affecting the availability of the reactor. In production settings, availability is directly related to the cost of electrical production. Experiments are underway to see if the conductor can be replaced by a z-pinch plasma or a liquid metal conductor.
Finally, the highly asymmetrical plasma cross sections and tightly wound magnetic fields require very high toroidal currents to maintain. Normally this would require extensive secondary heating systems, like neutral beam injection. These are energetically expensive, so the ST design relies on high bootstrap currents for economical operation. Luckily, high elongation and triangularity are the features that give rise to these currents, so it is possible that the ST will actually be more economical in this regard. This is an area of active research.
List of operational ST machines
- MAST, Culham Science Center, United Kingdom
- NSTX, Princeton Plasma Physics Laboratory, United States
- Globus-M, Ioffe Institute, Russia
- Proto-Sphera (formerly START), ENEA, Italy
- TST-2, University of Tokyo, Japan
- SUNIST, Tsinghua University, China
- PEGASUS, University of Wisconsin-Madison, United States
- ETE, National Space Research Institute, Brazil
- John Lawson, "Some Criteria for a Power Producing Thermonuclear Reactor", Proceedings of the Physical Society B, Volume 70 (1957), p. 6
- Sykes 2008, pg. 41
- Many advanced tokamak designs routinely hit numbers on the order of ~ 1 × 10²¹ keV • seconds / m³, see "Fusion Triple Product and the Density Limit of High-Density Internal Diffusion Barrier Plasmas in LHD", 35th EPS Conference on Plasma Phys. Hersonissos, 9–13 June 2008
- John Wesson and David Campbell, "Tokamaks", Clarendon Press, 2004, pg. 115
- Sykes 1997, pg. B247
- Sykes 2008, pg. 10
- D.L. Jassby, "Selection of a toroidal fusion reactor concept for a magnetic fusion production reactor", Journal of Fusion Energy, Volume 6 Number 1 (1987), pg. 65
- "Evaluation of Riggatron Concept", Office of Naval Research
- Charles Kessel, "What's an Advanced Tokamak", Spring APS, Philadelphia, 2003
- Y-K Martin Peng, "Spherical Torus, Compact Fusion at Low Yield"., ORNL/FEDC-87/7 (December 1984)
- Braams and Scott, pg. 225
- Y-K Martin Peng, "Compact DT Fusion Spherical Tori at Modest Fields", CONF-850610-37 (December 1985)
- T.J. McManamy et al., "STX Magnet Fabrication and Testing to 18T", Martin Marietta Energy Systems, December 1988
- Sykes 2008, pg. 11
- Alan Sykes et al., "First results from the START experiment", Nuclear Fusion, Volume 32 Number 4 (1992), pg. 694
- Sykes 1998, pg. 1
- "Derek Robinson: Physicist devoted to creating a safe form of energy from fusion" The Sunday Times, 11 December 2002
- Sykes 1997, pg. B248
- Sykes 2008, pg. 29
- Sykes 1998, pg. 4
- Sykes 2008, pg. 18
- See images in Sykes 2008, pg. 20
- Freidberg, pg. 414
- Freidberg, pg. 413
- Sykes 2008, pg. 24
- See examples, Sykes 2008, pg. 13
- Robin Herman, "Fusion: The Search for Endless Energy", Cambridge University Press, 1990, pg. 30
- Freidberg 2007, p. 287.
- Freidberg, pg. 412
- Sykes 2008, p. 43.
- Paolo Micozzi et al., "Ideal MHD stability limits of the PROTO-SPHERA configuration", Nuclear Fusion, Volume 50 Number 9 (September 2010)
- Yican Wu et al., "Conceptual study on liquid metal center conductor post in spherical tokamak reactors", Fusion Engineering and Design, Volumes 51-52 (November 2000), pg. 395-399
- Sykes 2008, p. 31.
- C.M. Braams and P.E. Scott, "Nuclear Fusion: Half a Century of Magnetic Confinement Fusion Research", Taylor & Francis, 2002, ISBN 0-7503-0705-6
- Jeffrey Freidberg, "Plasma Physics and Fusion Energy", Cambridge University Press, 2007
- Alan Sykes et al. (Sykes 1997), "High-β performance of the START spherical tokamak", Plasma Physics and Controlled Fusion, Volume 39 (1997), B247–B260
- Alan Sykes (Sykes 2008), "The Development of the Spherical Tokamak", ICPP, Fukuoka September 2008
- Alan Sykes (Sykes 1998), "The Spherical Tokamak Programme at Culham", EURATOM/UKAEA, 20 November 1998
- Spherical Tokamaks – list of ST experiments at tokamak.info
- Culham Centre for Fusion Energy – spherical tokamaks at Culham, UK, including details of the MAST and START experiments
The Universe loves to fool our eyes, giving the impression that celestial objects are located at the same distance from Earth. A good example can be seen in this spectacular image produced by the NASA/ESA Hubble Space Telescope. The galaxies NGC 5011B and NGC 5011C are imaged against a starry background.
Located in the constellation of Centaurus, the nature of these galaxies has puzzled astronomers. NGC 5011B (on the right) is a spiral galaxy belonging to the Centaurus Cluster of galaxies lying 156 million light-years away from the Earth. Long considered part of the faraway cluster of galaxies as well, NGC 5011C (the bluish galaxy at the centre of the image) is a peculiar object, with the faintness typical of a nearby dwarf galaxy, alongside the size of an early-type spiral.
Astronomers were curious about the appearance of NGC 5011C. If the two galaxies were at roughly the same distance from Earth, they would expect the pair to show signs of interactions between them. However, there was no visual sign of interaction between the two. How could this be possible?
To solve this problem, astronomers studied the velocity at which these galaxies are receding from the Milky Way and found that NGC 5011C is moving away far more slowly than its apparent neighbour, and that its motion is more consistent with that of the nearby Centaurus A group at a distance of 13 million light-years. Thus, NGC 5011C, with only about ten million times the mass of the Sun in its stars, must indeed be a nearby dwarf galaxy rather than a member of the distant Centaurus Cluster, as was believed for many years.
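The underlying reasoning is essentially Hubble's law: for galaxies carried along by the cosmic expansion, distance is roughly the recession velocity divided by the Hubble constant. The sketch below shows that arithmetic with an assumed H0 of 70 km/s per megaparsec and made-up velocities chosen only to mirror the contrast described here; in reality nearby dwarfs have peculiar motions comparable to their expansion velocity, so astronomers lean on additional distance indicators.

```python
KM_PER_S_PER_MPC = 70.0      # assumed Hubble constant H0
LY_PER_MPC = 3.26e6          # light-years per megaparsec

def hubble_distance_ly(recession_velocity_km_s, H0=KM_PER_S_PER_MPC):
    """Rough distance from Hubble's law, d = v / H0, converted to light-years."""
    return recession_velocity_km_s / H0 * LY_PER_MPC

# Made-up velocities chosen only to illustrate the contrast in the text:
for name, v in [("fast-receding cluster member", 3300.0),
                ("slow-receding nearby dwarf", 280.0)]:
    print(f"{name}: ~{hubble_distance_ly(v) / 1e6:.0f} million light-years")
```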
This image was taken with Hubble’s Advanced Camera for Surveys using visual and infrared filters.
The NASA/ESA Hubble Space Telescope provides us this week with a spectacular image of the bright star-forming ring that surrounds the heart of the barred spiral galaxy NGC 1097. In this image, the larger-scale structure of the galaxy is barely visible: its comparatively dim spiral arms, which surround its heart in a loose embrace, reach out beyond the edges of this frame.
This face-on galaxy, lying 45 million light-years away from Earth in the southern constellation of Fornax (The Furnace), is particularly attractive for astronomers. NGC 1097 is a Seyfert galaxy. Lurking at the very centre of the galaxy, a supermassive black hole 100 million times the mass of our Sun is gradually sucking in the matter around it. The area immediately around the black hole shines powerfully with radiation coming from the material falling in.
The distinctive ring around the black hole is bursting with new star formation due to an inflow of material toward the central bar of the galaxy. These star-forming regions are glowing brightly thanks to emission from clouds of ionised hydrogen. The ring is around 5000 light-years across, although the spiral arms of the galaxy extend tens of thousands of light-years beyond it.
NGC 1097 is also pretty exciting for supernova hunters. The galaxy experienced three supernovae (the violent deaths of high-mass stars) in the 11-year span between 1992 and 2003. This is definitely a galaxy worth checking on a regular basis.
However, what it is really exciting about NGC 1097 is that it is not wandering alone through space. It has two small galaxy companions, which dance “the dance of stars and the dance of space” like the gracious dancer of the famous poem The Dancer by Khalil Gibran.
The satellite galaxies are NGC 1097A, an elliptical galaxy orbiting 42 000 light-years from the centre of NGC 1097, and a small dwarf galaxy named NGC 1097B. Both galaxies are located beyond the frame of this image and cannot be seen. Astronomers have indications that NGC 1097 and NGC 1097A have interacted in the past.
This picture was taken with Hubble’s Advanced Camera for Surveys using visual and infrared filters.
A version of this image was submitted to the Hubble’s Hidden Treasures image processing competition by contestant Eedresha Sturdivant.
Like finding a silver needle in the haystack of space, the NASA/ESA Hubble Space Telescope has produced this beautiful image of the spiral galaxy IC 2233, one of the flattest galaxies known.
Typical spiral galaxies like the Milky Way are usually made up of three principal visible components: the disc where the spiral arms and most of the gas and dust is concentrated; the halo, a rough and sparse sphere around the disc that contains little gas, dust or star formation; and the central bulge at the heart of the disc, which is formed by a large concentration of ancient stars surrounding the Galactic Centre.
However, IC 2233 is far from being typical. This object is a prime example of a super-thin galaxy, where the galaxy’s diameter is at least ten times larger than its thickness. These galaxies consist of a simple disc of stars when seen edge on. This orientation makes them fascinating to study, giving another perspective on spiral galaxies. An important characteristic of this type of object is that they have a low brightness and almost all of them have no bulge at all.
The bluish colour that can be seen along the disc gives evidence of the spiral nature of the galaxy, indicating the presence of hot, luminous, young stars, born out of clouds of interstellar gas. In addition, unlike typical spirals, IC 2233 shows no well-defined dust lane. Only a few small patchy regions can be identified in the inner regions both above and below the galaxy’s mid-plane.
Lying in the constellation of Lynx, IC 2233 is located about 40 million light-years away from Earth. This galaxy was discovered by British astronomer Isaac Roberts in 1894.
This image was taken with the Hubble’s Advanced Camera for Surveys, combining visible and infrared exposures. The field of view in this image is approximately 3.4 by 3.4 arcminutes.
A version of this image was entered into the Hubble's Hidden Treasures image processing competition by contestant Luca Limatola.
Located in a relatively vacant region of space about 4200 light-years away and difficult to see using an amateur telescope, the lonesome planetary nebula NGC 7354 is often overlooked. However, thanks to this image captured by the NASA/ESA Hubble Space Telescope we are able to see this brilliant ball of smoky light in spectacular detail.
Just as shooting stars are not actually stars and lava lamps do not actually contain lava, planetary nebulae have nothing to do with planets. The name was coined by Sir William Herschel because when he first viewed a planetary nebula through a telescope, he could only identify a hazy smoky sphere, similar to gaseous planets such as Uranus. The name has stuck even though modern telescopes make it obvious that these objects are not planets at all, but the glowing gassy outer layers thrown off by a hot dying star.
It is believed that winds from the central star play an important role in determining the shape and morphology of planetary nebulae. The structure of NGC 7354 is relatively easy to distinguish. It consists of a circular outer shell, an elliptical inner shell, a collection of bright knots roughly concentrated in the middle and two symmetrical jets shooting out from either side. Research suggests that these features could be due to a companion central star, however the presence of a second star in NGC 7354 is yet to be confirmed.
NGC 7354, which is about half a light-year in diameter, resides in Cepheus, a constellation named after the mythical King Cepheus of Aethiopia.
A version of this image was entered into the Hubble’s Hidden Treasures image processing competition by contestant Bruno Conti.
The brilliant cascade of stars through the middle of this image is the galaxy ESO 318-13 as seen by the NASA/ESA Hubble Space Telescope. Despite being located millions of light-years from Earth, the stars captured in this image are so bright and clear you could almost attempt to count them.
Although ESO 318-13 is the main event in this image, it is sandwiched between a vast collection of bright celestial objects. Several stars near and far dazzle in comparison to the neat dusting contained within the galaxy. One that particularly stands out is located near the centre of the image, and looks like an extremely bright star located within the galaxy. This is, however, a trick of perspective. The star is located in the Milky Way, our own galaxy, and it shines so brightly because it is so much closer to us than ESO 318-13.
There are also a number of tiny glowing discs scattered throughout the frame that are more distant galaxies. In the top right corner, an elliptical galaxy can be clearly seen, a galaxy which is much larger but more distant than ESO 318-13. More interestingly, peeking through ESO 318-13, near the right-hand edge of the image, is a distant spiral galaxy.
Galaxies are largely made up of empty space; the stars within them only take up a small volume, and providing a galaxy is not too dusty, it can be largely transparent to light coming from the background. This makes overlapping galaxies like these quite common. One particularly dramatic example of this phenomenon is the galaxy pair NGC 3314 (heic1208).
The NASA/ESA Hubble Space Telescope provides us this week with an impressive image of the irregular galaxy NGC 5253.
NGC 5253 is one of the nearest of the known Blue Compact Dwarf (BCD) galaxies, and is located at a distance of about 12 million light-years from Earth in the southern constellation of Centaurus. The most characteristic signature of these galaxies is that they harbour very active star-formation regions. This is in spite of their low dust content and comparative lack of elements heavier than hydrogen and helium, which are usually the basic ingredients for star formation.
These galaxies contain molecular clouds that are quite similar to the pristine clouds that formed the first stars in the early Universe, which were devoid of dust and heavier elements. Hence, astronomers consider the BCD galaxies to be an ideal testbed for better understanding the primordial star-forming process.
NGC 5253 does contain some dust and heavier elements, but significantly less than the Milky Way galaxy. Its central regions are dominated by an intense star forming region that is embedded in an elliptical main body, which appears red in Hubble’s image. The central starburst zone consists of a rich environment of hot, young stars concentrated in star clusters, which glow in blue in the image. Traces of the starburst itself can be seen as a faint and diffuse glow produced by the ionised oxygen gas.
The true nature of BCD galaxies has puzzled astronomers for a long time. Numerical simulations following the current leading cosmological theory of galaxy formation, known as the Lambda Cold Dark Matter model, predict that there should be far more satellite dwarf galaxies orbiting big galaxies like the Milky Way than are actually observed. Astronomers refer to this discrepancy as the Dwarf Galaxy Problem.
This galaxy is considered part of the Centaurus A/Messier 83 group of galaxies, which includes the famous radio galaxy Centaurus A and the spiral galaxy Messier 83. Astronomers have pointed out the possibility that the peculiar nature of NGC 5253 could result from a close encounter with Messier 83, its closer neighbour.
This image was taken with the Hubble’s Advanced Camera for Surveys, combining visible and infrared exposures. The field of view in this image is approximately 3.4 by 3.4 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures image processing competition by contestant Nikolaus Sulzenauer.
The NASA/ESA Hubble Space Telescope has spotted the spiral galaxy ESO 499-G37, seen here against a backdrop of distant galaxies, scattered with nearby stars.
The galaxy is viewed from an angle, allowing Hubble to reveal its spiral nature clearly. The faint, loose spiral arms can be distinguished as bluish features swirling around the galaxy’s nucleus. This blue tinge emanates from the hot, young stars located in the spiral arms. The arms of a spiral galaxy have large amounts of gas and dust, and are often areas where new stars are constantly forming.
The galaxy’s most characteristic feature is a bright elongated nucleus. The bulging central core usually contains the highest density of stars in the galaxy, where typically a large group of comparatively cool old stars are packed in this compact, spheroidal region.
One feature common to many spiral galaxies is the presence of a bar running across the centre of the galaxy. These bars are thought to act as a mechanism that channels gas from the spiral arms to the centre, enhancing the star formation.
Recent studies suggest that ESO 499-G37’s nucleus sits within a small bar a few hundred light-years long, about a tenth the size of a typical galactic bar. Astronomers think that such small bars could be important in the formation of galactic bulges, since they might provide a mechanism for bringing material from the outer regions down to the inner ones. However, the connection between bars and bulge formation is still not clear, since bars are not a universal feature in spiral galaxies.
Lying in the constellation of Hydra, ESO 499-G37 is located about 59 million light-years away from the Sun. The galaxy belongs to the NGC 3175 group.
ESO 499-G37 was first observed in the late seventies within the ESO/Uppsala Survey of the ESO (B) atlas. This was a joint project undertaken by the European Southern Observatory (ESO) and the Uppsala Observatory, which used the ESO 1-metre Schmidt telescope at La Silla Observatory, Chile, to map a large portion of the southern sky looking for stars, galaxies, clusters, and planetary nebulae.
This picture was created from visible and infrared exposures taken with the Wide Field Channel of the Advanced Camera for Surveys. The field of view is approximately 3.4 arcminutes wide.
Luminous galaxies glow like fireflies on a dark night in this image snapped by the NASA/ESA Hubble Space Telescope. The central galaxy in this image is a gigantic elliptical galaxy designated 4C 73.08. A prominent spiral galaxy seen from "above" shines in the lower part of the image, while examples of galaxies viewed edge-on also populate the cosmic landscape.
In the optical and near-infrared light captured to make this image, 4C 73.08 does not appear all that beastly. But when viewed in longer wavelengths the galaxy takes on a very different appearance. Dust-piercing radio waves reveal plumes emanating from the core, where a supermassive black hole spews out twin jets of material. 4C 73.08 is classified as a radio galaxy as a result of this characteristic activity in the radio part of the electromagnetic spectrum.
Astronomers must study objects such as 4C 73.08 in multiple wavelengths in order to learn their true natures, just as seeing a firefly’s glow would tell a scientist only so much about the insect. Observing 4C 73.08 in visible light with Hubble illuminates galactic structure as well as the ages of constituent stars, and therefore the age of the galaxy itself. 4C 73.08 is decidedly redder than the prominent, bluer spiral galaxy in this image. The elliptical galaxy’s redness comes from the presence of many older, crimson stars, which shows that 4C 73.08 is older than its spiral neighbour.
The image was taken using Hubble’s Wide Field Camera 3 through two filters: one which captures green light, and one which captures red and near-infrared light.
The NASA/ESA Hubble Space Telescope has captured a beautiful galaxy that, with its reddish and yellow central area, looks rather like an explosion from a Hollywood movie. The galaxy, called NGC 5010, is in a period of transition. The aging galaxy is moving on from life as a spiral galaxy, like our Milky Way, to an older, less defined type called an elliptical galaxy. In this in-between phase, astronomers refer to NGC 5010 as a lenticular galaxy, which has features of both spirals and ellipticals.
NGC 5010 is located around 140 million light-years away in the constellation of Virgo (The Virgin). The galaxy is oriented sideways to us, allowing Hubble to peer into it and show the dark, dusty, remnant bands of spiral arms. NGC 5010 has notably started to develop a big bulge in its disc as it takes on a more rounded shape.
Most of the stars in NGC 5010 are red and elderly. The galaxy no longer contains all that many of the short-lived blue stars common in younger galaxies that still actively produce new populations of stars.
Much of the dusty and gaseous fuel needed to create fresh stars has already been used up in NGC 5010. Over time, the galaxy will grow progressively more “red and dead”, as astronomers describe elliptical galaxies.
Hubble's Advanced Camera for Surveys (ACS) snapped this image in violet and infrared light.
The NASA/ESA Hubble Space Telescope offers an impressive view of the centre of globular cluster NGC 6362. The image of this spherical collection of stars takes a deeper look at the core of the globular cluster, which contains a high concentration of stars with different colours.
Tightly bound by gravity, globular clusters are composed of old stars, which, at around 10 billion years old, are much older than the Sun. These clusters are fairly common, with more than 150 currently known in our galaxy, the Milky Way, and more which have been spotted in other galaxies.
Globular clusters are among the oldest structures in the Universe that are accessible to direct observational investigation, making them living fossils from the early years of the cosmos.
Astronomers infer important properties of globular clusters by looking at the light from their constituent stars. For many years, they were regarded as ideal laboratories for testing the standard stellar evolution theory. Among other things, this theory suggests that most of the stars within a globular cluster should be of a similar age.
Recently, however, high precision measurements performed in numerous globular clusters, primarily with the Hubble Space Telescope, have led some to question this widely accepted theory. In particular, certain stars appear younger and bluer than their companions, and they have been dubbed blue stragglers. NGC 6362 contains many of these stars.
Since they are usually found in the core regions of clusters, where the concentration of stars is large, the most likely explanation for this unexpected population of objects seems to be that they could be either the result of stellar collisions or transfer of material between stars in binary systems. This influx of new material would heat up the star and make it appear younger than its neighbours.
NGC 6362 is located about 25 000 light-years from Earth in the constellation of Ara (The Altar). British astronomer James Dunlop first observed this globular cluster on 30 June 1826.
This image was created combining ultraviolet, visual and infrared images taken with the Wide Field Channel of the Advanced Camera for Surveys and the Wide Field Camera 3. An image of NGC 6362 taken by the MPG/ESO 2.2-metre telescope will be published by the European Southern Observatory on Wednesday. See it on www.eso.org from 12:00 on 31 October.
The NASA/ESA Hubble Space Telescope has imaged the faint irregular galaxy NGC 3738, a starburst galaxy. The galaxy is in the midst of a violent episode of star formation, during which it is converting reservoirs of hydrogen gas harboured in the galaxy’s centre into stars. Hubble spots this gas glowing red around NGC 3738, one of the most distinctive signs of ongoing star formation.
Lying in the constellation of Ursa Major (The Great Bear), NGC 3738 is located about 12 million light-years from the Sun, and belongs to the Messier 81 group of galaxies. This galaxy — first observed by astronomer William Herschel back in 1789 — is a nearby example of a blue compact dwarf, the faintest type of starburst galaxy. Blue compact dwarfs are small compared to large spiral galaxies — NGC 3738 is around 10 000 light-years across, just one tenth of the size of the Milky Way.
This type of galaxy is blue in appearance by virtue of containing large clusters of hot, massive stars, which ionise the surrounding interstellar gas with their intense ultraviolet radiation. They are relatively faint and appear to be irregular in shape. Unlike spirals or elliptical galaxies, irregular galaxies do not have any distinctive features, such as a nuclear bulge or spiral arms. Rather, they are extremely chaotic in appearance. These galaxies are thought to resemble some of the earliest that formed in the Universe and may provide clues as to how stars appeared shortly after the Big Bang.
This image was created by combining visual and infrared images taken with the Wide Field Channel of the Advanced Camera for Surveys aboard the Hubble Space Telescope. The field of view of the Wide Field Channel is approximately 3.4 by 3.4 arcminutes wide.
NGC 3344 is a glorious spiral galaxy around half the size of the Milky Way, which lies 25 million light-years distant. We are fortunate enough to see NGC 3344 face-on, allowing us to study its structure in detail.
The galaxy features an outer ring swirling around an inner ring with a subtle bar structure in the centre. The central regions of the galaxy are predominantly populated by young stars, with the galactic fringes also featuring areas of active star formation.
Central bars are found in around two thirds of spiral galaxies. NGC 3344’s is clearly visible here, although it is not as dramatic as some (see for example heic1202).
The high density of stars in galaxies’ central regions gives them enough gravitational influence to affect the movement of other stars in their galaxy. However, NGC 3344’s outer stars are moving in an unusual manner, although the presence of the bar cannot entirely account for this, leaving astronomers puzzled. It is possible that in its past NGC 3344 passed close by another galaxy and accreted stars from it, but more research is needed to state this with confidence.
The image is a combination of exposures taken in visible and near-infrared light, using Hubble’s Advanced Camera for Surveys. The field of view is around 3.4 by 3.4 arcminutes, or around a tenth of the diameter of the full Moon.
The Universe is filled with mysterious objects. Many of them are as strange as they are beautiful. Among these, planetary nebulae are probably one of the most fascinating objects to behold in the night sky. No other type of object has such a large variety of shapes and structures. The NASA/ESA Hubble Space Telescope provides us this week with a striking image of Hen 3-1475, a planetary nebula in the making.
Planetary nebulae — the name arises because most of these objects resembled a planet when they were first discovered through early telescopes — are expanding, glowing shells of gas coming from Sun-like stars at the ends of their lives. They glow brightly because of the radiation that comes from a hot, compact core, which remains after the outer envelope is ejected, and is powerful enough to make these gossamer shells shine.
Each planetary nebula is complex and unique. Hen 3-1475 is a great example of a planetary nebula in the making, a phase which is known to astronomers as a protoplanetary or preplanetary nebula.
Since the central star has not yet blown away its complete shell, the star is not hot enough to ionise the shell of gas and so the nebula does not shine. Rather, we see the expelled gas thanks to light reflected off it. When the star’s envelope is fully ejected, it will begin to glow and become a planetary nebula.
Hen 3-1475 is located in the constellation of Sagittarius around 18 000 light-years away from us. The central star is more than 12 000 times as luminous as our Sun. Its most characteristic feature is a thick torus of dust around the central star and two S-shaped jets that are emerging from the pole regions of the central star. These jets are long outflows of fast-moving gas travelling at hundreds of kilometres per second.
The formation of these bipolar jets has puzzled astronomers for a long time. How can a spherical star form these complex structures? Recent studies suggest that the object’s characteristic shape and the large velocity outflow is created by a central source that ejects streams of gas in opposite directions and precesses once every thousand years. It is like an enormous, slowly rotating garden sprinkler in the middle of the sky. No wonder astronomers also have nicknamed this object the “Garden-sprinkler Nebula”.
This picture was taken with Hubble’s Wide Field Camera 3, which provides significantly higher resolution than previous observations made with the Wide Field and Planetary Camera 2 (heic0308).
- Hubblecast 52: The Death of Stars explains how Sun-like stars end their lives as planetary nebulae
This dazzling image shows the globular cluster Messier 69, or M 69 for short, as viewed through the NASA/ESA Hubble Space Telescope. Globular clusters are dense collections of old stars. In this picture, foreground stars look big and golden when set against the backdrop of the thousands of white, silvery stars that make up M 69.
Another aspect of M 69 lends itself to the bejewelled metaphor: As globular clusters go, M 69 is one of the most metal-rich on record. In astronomy, the term “metal” has a specialised meaning: it refers to any element heavier than the two most common elements in our Universe, hydrogen and helium. The nuclear fusion that powers stars created all of the metallic elements in nature, from the calcium in our bones to the carbon in diamonds. Successive generations of stars have built up the metallic abundances we see today.
Because the stars in globular clusters are ancient, their metallic abundances are much lower than those of more recently formed stars, such as the Sun. Studying the makeup of stars in globular clusters like M 69 has helped astronomers trace back the evolution of the cosmos.
M 69 is located 29 700 light-years away in the constellation Sagittarius (the Archer). The famed French comet hunter Charles Messier added M 69 to his catalogue in 1780. It is also known as NGC 6637.
The image is a combination of exposures taken in visible and near-infrared light by Hubble’s Advanced Camera for Surveys, and covers a field of view of approximately 3.4 by 3.4 arcminutes.
The NASA/ESA Hubble Space Telescope has provided us with another outstanding image of a nearby galaxy. This week, we highlight the galaxy NGC 4183, seen here with a beautiful backdrop of distant galaxies and nearby stars. Located about 55 million light-years from the Sun and spanning about eighty thousand light-years, NGC 4183 is a little smaller than the Milky Way. This galaxy, which belongs to the Ursa Major Group, lies in the northern constellation of Canes Venatici (The Hunting Dogs).
NGC 4183 is a spiral galaxy with a faint core and an open spiral structure. Unfortunately, this galaxy is viewed edge-on from the Earth, and we cannot fully appreciate its spiral arms. But we can admire its galactic disc.
The discs of galaxies are mainly composed of gas, dust and stars. There is evidence of dust over the galactic plane, visible as dark intricate filaments that block the visible light from the core of the galaxy. In addition, recent studies suggest that this galaxy may have a bar structure. Galactic bars are thought to act as a mechanism that channels gas from the spiral arms to the centre, enhancing star formation, which is typically more pronounced in the spiral arms than in the bulge of the galaxy.
British astronomer William Herschel first observed NGC 4183 on 14 January 1778.
This picture was created from visible and infrared images taken with the Wide Field Channel of the Advanced Camera for Surveys. The field of view is approximately 3.4 arcminutes wide.
This image uses data identified by Luca Limatola in the Hubble's Hidden Treasures image processing competition.
The NASA/ESA Hubble Space Telescope has produced a sharp image of NGC 4634, a spiral galaxy seen exactly side-on. Its disc is slightly warped by ongoing interactions with a nearby galaxy, and it is crisscrossed by clearly defined dust lanes and bright nebulae.
NGC 4634, which lies around 70 million light-years from Earth in the constellation of Coma Berenices, is one of a pair of interacting galaxies. Its neighbour, NGC 4633, lies just outside the upper right corner of the frame, and is visible in wide-field views of the galaxy. While it may be out of sight, it is not out of mind: its subtle effects on NGC 4634 are easy to see to a well-trained eye.
Gravitational interactions pull the neat spiral forms of galaxies out of shape as they get closer to each other, and the disruption to gas clouds triggers vigorous episodes of star formation. While this galaxy’s spiral pattern is not directly visible thanks to our side-on perspective, its disc is slightly warped, and there is clear evidence of star formation.
Along the full length of the galaxy, and scattered around parts of its halo, are bright pink nebulae. Similar to the Orion Nebula in the Milky Way, these are clouds of gas that are gradually coalescing into stars. The powerful radiation from the stars excites the gas and makes it light up, much like a fluorescent sign. The large number of these star formation regions is a telltale sign of gravitational interaction.
The dark filamentary structures that are scattered along the length of the galaxy are caused by cold interstellar dust blocking some of the starlight.
Hubble’s image is a combination of exposures in visible light produced by Hubble’s Advanced Camera for Surveys and the Wide Field and Planetary Camera 2.
This image portrays a beautiful view of the galaxy NGC 7090, as seen by the NASA/ESA Hubble Space Telescope. The galaxy is viewed edge-on from the Earth, meaning we cannot easily see the spiral arms, which are full of young, hot stars.
However, our side-on view shows the galaxy’s disc and the bulging central core, where typically a large group of cool old stars are packed in a compact, spheroidal region. In addition, there are two interesting features present in the image that are worth mentioning.
First, we are able to distinguish an intricate pattern of pinkish red regions over the whole galaxy. This indicates the presence of clouds of hydrogen gas. These structures trace the location of ongoing star formation, visual confirmation of recent studies that classify NGC 7090 as an actively star-forming galaxy.
Second, we observe dust lanes, depicted as dark regions inside the disc of the galaxy. In NGC 7090, these regions are mostly located in the lower half of the galaxy, showing an intricate filamentary structure. Looking from the outside in through the whole disc, the light emitted from the bright centre of the galaxy is absorbed by the dust, silhouetting the dusty regions against the bright light in the background.
Dust in our galaxy, the Milky Way, has been one of the worst enemies of observational astronomers for decades. But this does not mean that these regions are complete blind spots in the sky. At near-infrared wavelengths — slightly longer wavelengths than visible light — this dust is largely transparent and astronomers are able to study what is really behind it. At still longer wavelengths, the realm of radio astronomy, the dust itself can actually be observed, letting astronomers study the structure and properties of dust clouds and their relationship with star formation.
Lying in the southern constellation of Indus (The Indian), NGC 7090 is located about thirty million light-years from the Sun. Astronomer John Herschel first observed this galaxy on 4 October, 1834.
The image was taken using the Wide Field Channel of the Advanced Camera for Surveys aboard the Hubble Space Telescope and combines orange light (coloured blue here), infrared (coloured red) and emissions from glowing hydrogen gas (also in red).
A version of this image of NGC 7090 was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Rasid Tugral. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition is now closed and the list of winners is available here.
This sparkling picture taken by the NASA/ESA Hubble Space Telescope shows the centre of globular cluster M 4. The power of Hubble has resolved the cluster into a multitude of glowing orbs, each a colossal nuclear furnace.
M 4 is relatively close to us, lying 7200 light-years distant, making it a prime object for study. It contains several tens of thousands of stars and is noteworthy in being home to many white dwarfs — the cores of ancient, dying stars whose outer layers have drifted away into space.
In July 2003, Hubble helped make the astounding discovery of a planet called PSR B1620-26 b, 2.5 times the mass of Jupiter, which is located in this cluster. Its age is estimated to be around 13 billion years — almost three times as old as the Solar System! It is also unusual in that it orbits a binary system of a white dwarf and a pulsar (a type of neutron star).
Amateur stargazers may like to track M 4 down in the night sky. Use binoculars or a small telescope to scan the skies near the orange-red star Antares in Scorpius. M 4 is bright for a globular cluster, but it won’t look anything like Hubble’s detailed image: it will appear as a fuzzy ball of light in your eyepiece.
On Wednesday 5 September, the European Southern Observatory (ESO) will publish a wide-field image of M 4, showing the full spheroidal shape of the globular cluster. See it at www.eso.org on Wednesday.
A new image from the NASA/ESA Hubble Space Telescope shows NGC 5806, a spiral galaxy in the constellation Virgo (the Virgin). It lies around 80 million light years from Earth. Also visible in this image is a supernova explosion called SN 2004dg.
The exposures that are combined into this image were carried out in early 2005 in order to help pinpoint the location of the supernova, which exploded in 2004. The afterglow from this outburst of light, caused by a giant star exploding at the end of its life, can be seen as a faint yellowish dot near the bottom of the galaxy.
NGC 5806 was chosen to be one of a number of galaxies in a study into supernovae because Hubble’s archive already contained high resolution imagery of the galaxy, collected before the star had exploded. Since supernovae are both relatively rare, and impossible to predict with any accuracy, the existence of such before-and-after images is precious for astronomers who study these violent events.
Aside from the supernova, NGC 5806 is a relatively unremarkable galaxy: it is neither particularly large nor small, nor especially close or distant.
The galaxy’s bulge (the densest part in the centre of the spiral arms) is a so-called disk-type bulge, in which the spiral structure extends right to the centre of the galaxy, instead of there being a large elliptical bulge of stars present. It is also home to an active galactic nucleus, a supermassive black hole which is pulling in large amounts of matter from its immediate surroundings. As the matter spirals around the black hole, it heats up and emits powerful radiation.
This image is produced from three exposures in visible and infrared light, observed by Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.3 by 1.7 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Andre van der Hoeven (who won second prize in the competition for his image of Messier 77). Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed.
The NASA/ESA Hubble Space Telescope has produced this beautiful image of the globular cluster Messier 56 (also known as M 56 or NGC 6779), which is located about 33 000 light years away from the Earth in the constellation of Lyra (The Lyre). The cluster is composed of a large number of stars, tightly bound to each other by gravity.
However, this was not known when Charles Messier first observed it in January 1779. He described Messier 56 as “a nebula without stars”, like most globular clusters that he discovered — his telescope was not powerful enough to individually resolve any of the stars visible here, making it look like a fuzzy ball through his telescope’s eyepiece. We clearly see from Hubble’s image how the development of technology over the years has helped our understanding of astronomical objects.
Astronomers typically infer important properties of globular clusters by looking at the light of their constituent stars. But they have to be very careful when they observe objects like Messier 56, which is located close to the Galactic plane. This region is crowded by “field-stars”, in other words, stars in the Milky Way that happen to lie in the same direction but do not belong to the cluster. These objects can contaminate the light, and hence undermine the conclusions reached by astronomers.
A tool often used by scientists for studying stellar clusters is the colour-magnitude (or Hertzsprung-Russell) diagram. This chart compares the brightness and colour of stars – which in turn, tells scientists what the surface temperature of a star is.
By comparing high quality observations taken with the Hubble Space Telescope with results from the standard theory of stellar evolution, astronomers can characterise the properties of a cluster. In the case of Messier 56, this includes its age, which at 13 billion years is approximately three times the age of the Sun. Furthermore, they have also been able to study the chemical composition of Messier 56. The cluster has relatively few elements heavier than hydrogen and helium, typically a sign of stars that were born early in the Universe’s history, before many of the elements in existence today were formed in significant quantities.
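As a toy illustration of the colour-magnitude diagram mentioned above, the snippet below simply plots apparent magnitude against B−V colour for a handful of made-up stars, with the magnitude axis inverted so that brighter stars sit higher; real cluster studies use calibrated photometry of thousands of member stars and compare the result with theoretical isochrones, none of which is attempted here.

```python
import matplotlib.pyplot as plt

# Made-up photometry: (B - V colour, visual magnitude V) for a few illustrative stars.
b_minus_v = [0.45, 0.55, 0.60, 0.70, 0.85, 1.00, 1.15, 0.20]
v_mag     = [17.8, 17.2, 16.9, 16.3, 15.6, 14.9, 14.3, 16.5]  # last point: a "blue straggler"

plt.scatter(b_minus_v, v_mag, s=12)
plt.gca().invert_yaxis()            # brighter stars (smaller magnitudes) plotted higher
plt.xlabel("B - V colour (bluer on the left)")
plt.ylabel("apparent magnitude V")
plt.title("Toy colour-magnitude diagram")
plt.show()
```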
Astronomers have found that the majority of clusters with this type of chemical makeup lie along a plane in the Milky Way’s halo. This suggests that such clusters were captured from a satellite galaxy, rather than being the oldest members of the Milky Way's globular cluster system as had been previously thought.
This image consists of visible and near-infrared exposures from Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.3 by 3.3 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Gilles Chapdelaine. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed and the results will be published soon.
In terms of intergalactic real estate, our Solar System has a plum location as part of a big, spiral galaxy, the Milky Way. Numerous less glamorous dwarf galaxies keep the Milky Way company. Many galaxies, however, are comparatively isolated, without close neighbours. One such example is the small galaxy known as DDO 190, snapped here in a new image from the NASA/ESA Hubble Space Telescope.
DDO 190 is classified as a dwarf irregular galaxy as it is relatively small and lacks clear structure. Older, reddish stars mostly populate DDO 190’s outskirts, while some younger, bluish stars gleam in DDO 190’s more crowded interior. Some pockets of ionised gas heated up by stars appear here and there, with the most noticeable one shining towards the bottom of DDO 190 in this picture. Meanwhile, a great number of distant galaxies with evident spiral, elliptical and less-defined shapes glow in the background.
DDO 190 lies around nine million light-years away from our Solar System. It is considered part of the loosely associated Messier 94 group of galaxies, not far from the Local Group of galaxies that includes the Milky Way. Canadian astronomer Sidney van der Bergh was the first to record DDO 190 in 1959 as part of the DDO catalogue of dwarf galaxies. (“DDO” stands for the David Dunlap Observatory, now managed by the Royal Astronomical Society of Canada, where the catalogue was created).
Although within the Messier 94 group, DDO 190 is on its own. The galaxy’s nearest dwarf galaxy neighbour, DDO 187, is thought to be no closer than three million light-years away. In contrast, many of the Milky Way’s companion galaxies, such as the Large and Small Magellanic Clouds, reside within a fifth or so of that distance, and even the giant spiral of the Andromeda Galaxy is closer to the Milky Way than DDO 190 is to its nearest neighbour.
Hubble’s Advanced Camera for Surveys captured this image in visible and infrared light. The field of view is around 3.3 by 3.3 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Claude Cornen. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed and the results will be published soon.
Turning its 2.4-metre eye to the Tarantula Nebula, the NASA/ESA Hubble Space Telescope has taken this close-up of the outskirts of the main cloud of the Nebula.
The bright wispy structures are the signature of an environment rich in ionised hydrogen gas, called H II by astronomers. In reality these appear red, but the choice of filters and colours of this image, which includes exposures both in visible and infrared light, make the gas appear green.
These regions contain recently formed stars, which emit powerful ultraviolet radiation that ionises the gas around them. These clouds are ephemeral as eventually the stellar winds from the newborn stars and the ionisation process will blow away the clouds, leaving stellar clusters like the Pleiades.
Located in the Large Magellanic Cloud, one of our neighbouring galaxies, some 170 000 light-years from Earth, the Tarantula Nebula is the brightest known nebula in the Local Group of galaxies. It is also the largest (around 650 light-years across) and most active star-forming region known in our group of galaxies, containing numerous clouds of dust and gas and two bright star clusters. A recent Hubble image shows a large part of the nebula immediately adjacent to this field of view.
The cluster at the Tarantula nebula’s centre is relatively young and very bright. While it is outside the field of view of this image, the energy from it is responsible for most of the brightness of the Nebula, including the part we see here. The nebula is in fact so luminous that if it were located within 1000 light-years from Earth, it would cast shadows on our planet.
The Tarantula Nebula was host to the closest supernova ever detected since the invention of the telescope, supernova 1987A, which was visible to the naked eye.
The image was produced by Hubble’s Advanced Camera for Surveys, and has a field of view of approximately 3.3 by 3.3 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Judy Schmidt. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed and the results will be published soon.
The NASA/ESA Hubble Space Telescope offers this delightful view of the crowded stellar encampment called Messier 68, a spherical, star-filled region of space known as a globular cluster. Mutual gravitational attraction amongst a cluster’s hundreds of thousands or even millions of stars keeps stellar members in check, allowing globular clusters to hang together for many billions of years.
Astronomers can measure the ages of globular clusters by looking at the light of their constituent stars. The chemical elements leave signatures in this light, and the starlight reveals that globular clusters' stars typically contain fewer heavy elements, such as carbon, oxygen and iron, than stars like the Sun. Since successive generations of stars gradually create these elements through nuclear fusion, stars having fewer of them are relics of earlier epochs in the Universe. Indeed, the stars in globular clusters rank among the oldest on record, dating back more than 10 billion years.
More than 150 of these objects surround our Milky Way galaxy. On a galactic scale, globular clusters are indeed not all that big. In Messier 68's case, its constituent stars span a volume of space with a diameter of little more than a hundred light-years. The disc of the Milky Way, on the other hand, extends over some 100 000 light-years or more.
Messier 68 is located about 33 000 light-years from Earth in the constellation Hydra (The Female Water Snake). French astronomer Charles Messier notched the object as the sixty-eighth entry in his famous catalogue in 1780.
Hubble added Messier 68 to its own impressive list of cosmic targets in this image using the Wide Field Camera of Hubble’s Advanced Camera for Surveys. The image, which combines visible and infrared light, has a field of view of approximately 3.4 by 3.4 arcminutes.
The galaxy NGC 4700 bears the signs of the vigorous birth of many new stars in this image captured by the NASA/ESA Hubble Space Telescope.
The many bright, pinkish clouds in NGC 4700 are known as H II regions, where intense ultraviolet light from hot young stars is causing nearby hydrogen gas to glow. H II regions often come part-and-parcel with the vast molecular clouds that spawn fresh stars, thus giving rise to the locally ionised gas.
In 1610, French astronomer Nicolas-Claude Fabri de Peiresc peered through a telescope and found what turned out to be the first H II region on record: the Orion Nebula, located relatively close to our Solar System here in the Milky Way. Astronomers study these regions throughout the Milky Way and those easily seen in other galaxies to gauge the chemical makeup of cosmic environments and their influence on the formation of stars.
NGC 4700 was discovered back in March 1786 by the British astronomer William Herschel who noted it as a “very faint nebula”. NGC 4700, along with many other relatively close galaxies, is found in the constellation of Virgo (The Virgin) and is classified as a barred spiral galaxy, similar in structure to the Milky Way. It lies about 50 million light-years from us and is moving away from us at about 1400 km/second due to the expansion of the Universe.
The NASA/ESA Hubble Space Telescope has captured a crowd of stars that looks rather like a stadium darkened before a show, lit only by the flashbulbs of the audience’s cameras. Yet the many stars of this object, known as Messier 107, are not a fleeting phenomenon, at least by human reckoning of time — these ancient stars have gleamed for many billions of years.
Messier 107 is one of more than 150 globular star clusters found around the disc of the Milky Way galaxy. These spherical collections each contain hundreds of thousands of extremely old stars and are among the oldest objects in the Milky Way. The origin of globular clusters and their impact on galactic evolution remains somewhat unclear, so astronomers continue to study them through pictures such as this one obtained by Hubble.
As globular clusters go, Messier 107 is not particularly dense. Visually comparing its appearance to other globular clusters, such as Messier 53 or Messier 54, reveals that the stars within Messier 107 are not packed as tightly, thereby making its members more distinct, like individual fans in a stadium's stands.
Messier 107 can be found in the constellation of Ophiuchus (The Serpent Bearer) and is located about 20 000 light-years from the Solar System.
French astronomer Pierre Méchain first noted the object in 1782, and British astronomer William Herschel documented it independently a year later. A Canadian astronomer, Helen Sawyer Hogg, added Messier 107 to Charles Messier's famous astronomical catalogue in 1947.
This picture was obtained with the Wide Field Camera of Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.4 by 3.4 arcminutes.
This image snapped by the NASA/ESA Hubble Space Telescope reveals an exquisitely detailed view of part of the disc of the spiral galaxy NGC 4565. This bright galaxy is one of the most famous examples of an edge-on spiral galaxy, oriented perpendicularly to our line of sight so that we see right into its luminous disc. NGC 4565 has been nicknamed the Needle Galaxy because, when seen in full, it appears as a very narrow streak of light on the sky.
The edgewise view into the Needle Galaxy shown here looks very similar to the view we have from our Solar System into the core of the Milky Way. In both cases ribbons of dust block some of the light coming from the galactic disc. To the lower right, the dust stands in even starker contrast against the copious yellow light from the star-filled central regions. NGC 4565’s core is off camera to the lower right. For a full view of NGC 4565 for comparison’s sake, see this wider field of view from ESO’s Very Large Telescope.
Studying galaxies like NGC 4565 helps astronomers learn more about our home, the Milky Way. At a distance of only about 40 million light-years, NGC 4565 is relatively close by, and being seen edge-on makes it a particularly useful object for comparative study. As spiral galaxies go, NGC 4565 is a whopper — about a third as big again as the Milky Way.
The image was taken with Hubble’s Advanced Camera for Surveys and has a field of view of approximately 3.4 by 3.4 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Josh Barrington. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed and the results will be published soon.
A bright star is surrounded by a tenuous shell of gas in this unusual image from the NASA/ESA Hubble Space Telescope. U Camelopardalis, or U Cam for short, is a star nearing the end of its life. As it begins to run low on fuel, it is becoming unstable. Every few thousand years, it coughs out a nearly spherical shell of gas as a layer of helium around its core begins to fuse. The gas ejected in the star’s latest eruption is clearly visible in this picture as a faint bubble of gas surrounding the star.
U Cam is an example of a carbon star. This is a rare type of star whose atmosphere contains more carbon than oxygen. Due to its low surface gravity, typically as much as half of the total mass of a carbon star may be lost by way of powerful stellar winds.
Located in the constellation of Camelopardalis (The Giraffe), near the North Celestial Pole, U Cam itself is actually much smaller than it appears in Hubble’s picture. In fact, the star would easily fit within a single pixel at the centre of the image. Its brightness, however, is enough to overwhelm the capability of Hubble’s Advanced Camera for Surveys, making the star look much bigger than it really is.
The shell of gas, which is both much larger and much fainter than its parent star, is visible in intricate detail in Hubble’s portrait. While phenomena that occur at the ends of stars’ lives are often quite irregular and unstable (see for example Hubble’s images of Eta Carinae, potw1208a), the shell of gas expelled from U Cam is almost perfectly spherical.
The image was produced with the High Resolution Channel of the Advanced Camera for Surveys.
Relatively few galaxies possess the sweeping, luminous spiral arms or brightly glowing centre of our home galaxy the Milky Way. In fact, most of the Universe's galaxies look like small, amorphous clouds of vapour. One of these galaxies is DDO 82, captured here in an image from the NASA/ESA Hubble Space Telescope. Though tiny compared to the Milky Way, such dwarf galaxies still contain between a few million and a few billion stars.
DDO 82, also known by the designation UGC 5692, is not without a hint of structure, however. Astronomers classify it as an Sm galaxy, or Magellanic spiral galaxy, named after the Large Magellanic Cloud, a dwarf galaxy that orbits the Milky Way. That galaxy, like DDO 82, is said to have one spiral arm.
In the case of DDO 82, gravitational interactions over its history seem to have discombobulated it so that this structure is not as evident as in the Large Magellanic Cloud. Accordingly, astronomers also refer to DDO 82 and others of a similar unshapely nature as dwarf irregular galaxies.
DDO 82 can be found in the constellation of Ursa Major (the Great Bear) approximately 13 million light-years away. The object is considered part of the M81 Group of around three dozen galaxies. DDO 82 gets its name from its entry number in the David Dunlap Observatory Catalogue. Canadian astronomer Sidney van den Bergh originally compiled this list of dwarf galaxies in 1959.
The image is made up of exposures taken in visible and infrared light by Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.3 by 3.3 arcminutes.
Like many of the most famous objects in the sky, globular cluster Messier 10 was of little interest to its discoverer: Charles Messier, the 18th century French astronomer, catalogued over 100 galaxies and clusters, but was primarily interested in comets. Through the telescopes available at the time, comets, nebulae, globular clusters and galaxies appeared just as faint, diffuse blobs and could easily be confused for one another.
Only by carefully observing their motion — or lack of it — were astronomers able to distinguish them: comets move slowly relative to the stars in the background, while other more distant astronomical objects do not move at all.
Messier’s decision to catalogue all the objects that he could find and that were not comets, was a pragmatic solution which would have a huge impact on astronomy. His catalogue of just over 100 objects includes many of the most famous objects in the night sky. Messier 10, seen here in an image from the NASA/ESA Hubble Space Telescope, is one of them. Messier described it in the very first edition of his catalogue, which was published in 1774 and included the first 45 objects he identified.
Messier 10 is a ball of stars that lies about 15 000 light-years from Earth, in the constellation of Ophiuchus (The Serpent Bearer). Approximately 80 light-years across, it should therefore appear about two thirds the size of the Moon in the night sky. However, its outer regions are extremely diffuse, and even the comparatively bright core is too dim to see with the naked eye.
Hubble, which has no problems seeing faint objects, has observed the brightest part of the centre of the cluster in this image, a region which is about 13 light-years across.
This image is made up of observations made in visible and infrared light using Hubble’s Advanced Camera for Surveys. The observations were carried out as part of a major Hubble survey of globular clusters in the Milky Way.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant flashenthunder. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed and the results will be published soon.
The NASA/ESA Hubble Space Telescope has captured this view of the dwarf galaxy UGC 5497, which looks a bit like salt dashed on black velvet in this image.
The object is a compact blue dwarf galaxy that is infused with newly formed clusters of stars. The bright, blue stars that arise in these clusters help to give the galaxy an overall bluish appearance that lasts for several million years until these fast-burning stars explode as supernovae.
UGC 5497 is considered part of the M 81 group of galaxies, which is located about 12 million light-years away in the constellation Ursa Major (The Great Bear). UGC 5497 turned up in a ground-based telescope survey back in 2008 looking for new dwarf galaxy candidates associated with Messier 81.
According to the leading cosmological theory of galaxy formation, called Lambda Cold Dark Matter, there should be far more satellite dwarf galaxies associated with big galaxies like the Milky Way and Messier 81 than are currently known. Finding previously overlooked objects such as this one has helped cut into the expected tally — but only by a small amount.
Astrophysicists therefore remain puzzled over the so-called "missing satellite" problem.
The field of view in this image, which is a combination of visible and infrared exposures from Hubble’s Advanced Camera for Surveys, is approximately 3.4 by 3.4 arcminutes.
This image, taken by the NASA/ESA Hubble Space Telescope, shows a detailed view of the spiral arms on one side of the galaxy Messier 99. Messier 99 is a so-called grand design spiral, with long, large and clearly defined spiral arms — giving it a structure somewhat similar to the Milky Way.
Lying around 50 million light-years away, Messier 99 is one of over a thousand galaxies that make up the Virgo Cluster, the closest cluster of galaxies to us. Messier 99 itself is relatively bright and large, meaning it was one of the first galaxies to be discovered, way back in the 18th century. This earned it a place in Charles Messier’s famous catalogue of astronomical objects.
In recent years, a number of unexplained phenomena in Messier 99 have been studied by astronomers. Among these is the nature of one of the brighter stars visible in this image. Catalogued as PTF 10fqs, and visible as a yellow-orange star in the top-left corner of this image, it was first spotted by the Palomar Transient Facility, which scans the skies for sudden changes in brightness (or transient phenomena, to use astronomers’ jargon). These can be caused by different kinds of event, including variable stars and supernova explosions.
What is unusual about PTF 10fqs is that it has so far defied classification: it is brighter than a nova (a bright eruption on a star’s surface), but fainter than a supernova (the explosion that marks the end of life for a large star). Scientists have offered a number of possible explanations, including the intriguing suggestion that it could have been caused by a giant planet plunging into its parent star.
This Hubble image was made in June 2010, during the period when the outburst was fading, so PTF 10fqs’s location could be pinpointed with great precision. These measurements will allow other telescopes to home in on the star in future, even when the afterglow of the outburst has faded to nothing.
A version of this image of M 99 was entered into the Hubble’s Hidden Treasures Competition by contestant Matej Novak. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition is now closed and the winners will be announced soon.
This image from the NASA/ESA Hubble Space Telescope shows NGC 7026, a planetary nebula. Located just beyond the tip of the tail of the constellation of Cygnus (The Swan), this butterfly-shaped cloud of glowing gas and dust is the wreckage of a star similar to the Sun.
Planetary nebulae, despite their name, have nothing to do with planets. They are in fact a relatively short-lived phenomenon that occurs at the end of the life of mid-sized stars. As a star’s source of nuclear fuel runs out, its outer layers are puffed out, leaving only the hot core of the star behind. As the gaseous envelope heats up, the atoms in it are excited, and it lights up like a fluorescent sign.
Fluorescent lights on Earth get their bright colours from the gases they are filled with. Neon signs, famously, produce a bright red colour, while ultraviolet lights (black lights) typically contain mercury. The same goes for nebulae: their vivid colours are produced by the mix of gases present in them.
This image of NGC 7026 shows starlight in green, light from glowing nitrogen gas in red, and light from oxygen in blue (in reality, this appears green, but the colour in this image has been shifted to increase the contrast).
As well as visible light, NGC 7026 emits X-ray radiation, and has been studied by ESA’s XMM-Newton space telescope. X-rays are a result of the extremely high temperatures of the gas in NGC 7026.
This image was produced by the Wide Field and Planetary Camera 2 aboard the Hubble Space Telescope. The image is 35 by 35 arcseconds.
A version of this image was entered into the Hubble’s Hidden Treasures Competition by contestant Linda Morgan-O'Connor. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public.
The NASA/ESA Hubble Space Telescope captured this image of the spiral galaxy known as ESO 498-G5. One interesting feature of this galaxy is that its spiral arms wind all the way into the centre, so that ESO 498-G5's core looks a bit like a miniature spiral galaxy. This sort of structure is in contrast to the elliptical star-filled centres (or bulges) of many other spiral galaxies, which instead appear as glowing masses, as in the case of NGC 6384.
Astronomers refer to the distinctive spiral-like bulge of galaxies such as ESO 498-G5 as disc-type bulges, or pseudobulges, while bright elliptical centres are called classical bulges. Observations from the Hubble Space Telescope, which does not have to contend with the distorting effects of Earth's atmosphere, have helped to reveal that these two different types of galactic centres exist. These observations have also shown that star formation is still going on in disc-type bulges and has ceased in classical bulges. This means that galaxies can be a bit like Russian matryoshka dolls: classical bulges look much like a miniature version of an elliptical galaxy, embedded in the centre of a spiral, while disc-type bulges look like a second, smaller spiral galaxy located at the heart of the first — a spiral within a spiral.
The similarities between types of galaxy bulge and types of galaxy go beyond their appearance. Just like giant elliptical galaxies, the classical bulges consist of great swarms of stars moving about in random orbits. Conversely, the structure and movement of stars within disc-type bulges mirror the spiral arms arrayed in a galaxy's disc. These differences suggest different origins for the two types of bulges: while classical bulges are thought to develop through major events, such as mergers with other galaxies, disc-type bulges evolve gradually, developing their spiral pattern as stars and gas migrate to the galaxy’s centre.
ESO 498-G5 is located around 100 million light-years away in the constellation of Pyxis (The Compass). This image is made up of exposures in visible and infrared light taken by Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.3 by 1.6 arcminutes.
Visible in the constellation of Andromeda, NGC 891 is located approximately 30 million light-years away from Earth. The NASA/ESA Hubble Space Telescope turned its powerful wide field Advanced Camera for Surveys towards this spiral galaxy and took this close-up of its northern half. The galaxy's central bulge is just out of the image on the bottom left.
The galaxy, spanning some 100 000 light-years, is seen exactly edge-on, and reveals its thick plane of dust and interstellar gas. While initially thought to look like our own Milky Way if seen from the side, more detailed surveys revealed the existence of filaments of dust and gas escaping the plane of the galaxy into the halo over hundreds of light-years. They can be clearly seen here against the bright background of the galaxy halo, expanding into space from the disc of the galaxy.
Astronomers believe these filaments to be the result of the ejection of material due to supernovae or intense stellar formation activity. By lighting up when they are born, or exploding when they die, stars cause powerful winds that can blow dust and gas over hundreds of light-years in space.
A few foreground stars from the Milky Way shine brightly in the image, while distant elliptical galaxies can be seen in the lower right of the image.
NGC 891 is part of a small group of galaxies bound together by gravity.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Nick Rose. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public.
This mottled landscape showing the impact crater Tycho is among the most violent-looking places on our Moon. But astronomers didn’t aim the NASA/ESA Hubble Space Telescope in this direction to study Tycho itself. The image was taken in preparation for the transit of Venus across the Sun’s face on 5-6 June 2012.
Hubble cannot look at the Sun directly, so astronomers are planning to point the telescope at Earth’s Moon and use it as a mirror to capture reflected sunlight. During the transit a small fraction of that light will have passed through Venus’s atmosphere, and imprinted on that light astronomers expect to find the fingerprints of the planet’s atmospheric makeup.
These observations will mimic a technique that is already being used to sample the atmospheres of giant planets outside our Solar System passing in front of their stars. In the case of the Venus transit observations, astronomers already know the chemical makeup of Venus’s atmosphere, and that it shows no signs of life. But they can use the event to test whether their technique has a chance of detecting the very faint fingerprints of the atmosphere of an Earth-like planet around another star.
This image shows an area approximately 700 kilometres across, and reveals lunar features as small as roughly 170 metres across. The large bullseye near the top of the picture is the impact crater itself, caused by an asteroid strike about 100 million years ago. The bright trails radiating from the crater were formed by material ejected from the impact area during the asteroid collision. Tycho is about 80 kilometres wide and is circled by a rim of material rising almost 5 kilometres above the crater floor.
Because the astronomers only have one shot at observing the transit, they had to carefully plan how the study would be carried out. Part of their planning included these test observations of the Moon made on 11 January 2012.
This is the last time this century sky watchers can view Venus passing in front of the Sun, as the next transit will not happen until 2117.
The image was produced by Hubble’s Advanced Camera for Surveys. A narrow strip along the centre, and small parts of the upper left part of the image were not imaged by Hubble during its observations, and show data from lower-resolution observations made by a ground-based telescope.
This image from the NASA/ESA Hubble Space Telescope could seem like a quiet patch of sky at first glance. But zooming into the central part of a galaxy cluster — one of the largest structures of the Universe — is rather like looking at the eye of the storm.
Clusters of galaxies are large groups consisting of dozens to hundreds of galaxies, which are bound together by gravity. The galaxies sometimes stray too close to one another and the huge gravitational forces at play can distort them or even rip matter off when they collide with one another.
This particular cluster, called Abell 1185, is a chaotic one. Galaxies of various shapes and sizes are drifting dangerously close to one another. Some have already been ripped apart in this cosmic maelstrom, shedding trails of matter into the void following their close encounter. They have formed a familiar shape called The Guitar, located just outside the frame of this image.
Abell 1185 is located approximately 400 million light-years away from Earth and spans one million light-years across. A few of the elliptical galaxies that form the cluster are visible in the corners of this image, but mostly, the small elliptical shapes seen are faraway galaxies in the background, located much further away, in a quieter area of the Universe.
The NASA/ESA Hubble Space Telescope has been at the cutting edge of research into what happens to stars like our Sun at the ends of their lives (see for example Hubblecast 51). One stage that stars pass through as they run out of nuclear fuel is the preplanetary, or protoplanetary nebula. This Hubble image of the Egg Nebula shows one of the best views to date of this brief but dramatic phase in a star’s life.
The preplanetary nebula phase is a short period in the cycle of stellar evolution — over a few thousand years, the hot remains of the star in the centre of the nebula heat it up, excite the gas, and make it glow as a planetary nebula. The short lifespan of preplanetary nebulae means there are relatively few of them in existence at any one time. Moreover, they are very dim, requiring powerful telescopes to be seen. This combination of rarity and faintness means they were only discovered comparatively recently. The Egg Nebula, the first to be discovered, was first spotted less than 40 years ago, and many aspects of this class of object remain shrouded in mystery.
At the centre of this image, and hidden in a thick cloud of dust, is the nebula’s central star. While we can’t see the star directly, four searchlight beams of light coming from it shine out through the nebula. It is thought that ring-shaped holes in the thick cocoon of dust, carved by jets coming from the star, let the beams of light emerge through the otherwise opaque cloud. The precise mechanism by which stellar jets produce these holes is not known for certain, but one possible explanation is that a binary star system, rather than a single star, exists at the centre of the nebula.
The onion-like layered structure of the more diffuse cloud surrounding the central cocoon is caused by periodic bursts of material being ejected from the dying star. The bursts typically occur every few hundred years.
The distance to the Egg Nebula is only known very approximately, the best guess placing it at around 3000 light-years from Earth. This in turn means that astronomers do not have any accurate figures for the size of the nebula (it may be larger and further away, or smaller but nearer).
This image is produced from exposures in visible and infrared light from Hubble’s Wide Field Camera 3.
These bright stars shining through what looks like a haze in the night sky are part of a young stellar grouping in one of the largest known star formation regions of the Large Magellanic Cloud (LMC), a dwarf satellite galaxy of the Milky Way. The image was captured by the NASA/ESA Hubble Space Telescope’s Wide Field Planetary Camera 2.
The stellar grouping is known to stargazers as NGC 2040 or LH 88. It is essentially a very loose star cluster whose stars have a common origin and are drifting together through space. There are three different types of stellar associations defined by their stellar properties. NGC 2040 is an OB association, a grouping that usually contains 10–100 stars of type O and B — these are high-mass stars that have short but brilliant lives. It is thought that most of the stars in the Milky Way were born in OB associations.
There are several such groupings of stars in the LMC, including one previously featured as a Hubble Picture of the Week. Just like the others, LH 88 consists of several high-mass young stars in a large nebula of partially ionised hydrogen gas, and lies in what is known to be a supergiant shell of gas called LMC 4.
Over a period of several million years, thousands of stars may form in these supergiant shells, which are the largest interstellar structures in galaxies. The shells themselves are believed to have been created by strong stellar winds and clustered supernova explosions of massive stars that blow away surrounding dust and gas, and in turn trigger further episodes of star formation.
The LMC is the third closest galaxy to our Milky Way. It is located some 160 000 light-years away, and is about 100 times smaller than our own.
This image, which shows ultraviolet, visible and infrared light, covers a field of view of approximately 1.8 by 1.8 arcminutes.
A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Eedresha Sturdivant. Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public.
In this image, the NASA/ESA Hubble Space Telescope has captured the brilliance of the compact centre of Messier 70, a globular cluster. Quarters are always tight in globular clusters, where the mutual hold of gravity binds together hundreds of thousands of stars in a small region of space. Having this many shining stars piled on top of one another from our perspective makes globular clusters a popular target for amateur skywatchers and scientists alike. Messier 70 offers a special case because it has undergone what is known as a core collapse. In these clusters, even more stars squeeze into the object's core than on average, such that the brightness of the cluster increases steadily towards its centre.
The legions of stars in a globular cluster orbit about a shared centre of gravity. Some stars maintain relatively circular orbits, while others loop out into the cluster's fringes. As the stars interact with each other over time, lighter stars tend to pick up speed and migrate out toward the cluster's edges, while the heavier stars slow and congregate in orbits toward the centre. This huddling effect produces the denser, brighter centres characteristic of core-collapsed clusters. About a fifth of the more than 150 globular clusters in the Milky Way have undergone a core collapse.
Although many globular clusters call the galaxy's edges home, Messier 70 orbits close to the Milky Way's centre, around 30 000 light-years away from the Solar System. It is remarkable that Messier 70 has held together so well, given the strong gravitational pull of the Milky Way's hub.
Messier 70 is only about 68 light-years in diameter and can be seen, albeit very faintly, with binoculars in dark skies in the constellation of Sagittarius (The Archer). French astronomer Charles Messier documented the object in 1780 as the seventieth entry in his famous astronomical catalogue.
This picture was obtained with the Wide Field Camera of Hubble’s Advanced Camera for Surveys. The field of view is around 3.3 by 3.3 arcminutes.
This image from the NASA/ESA Hubble Space Telescope shows NGC 4980, a spiral galaxy in the southern constellation of Hydra. The shape of NGC 4980 appears slightly deformed, something which is often a sign of recent tidal interactions with another galaxy. That explanation seems unlikely here, however, as there are no other galaxies in NGC 4980’s immediate vicinity.
The image was produced as part of a research program into the nature of galactic bulges, the bright, dense, elliptical centres of galaxies. Classical bulges are relatively disordered, with stars orbiting the galactic centre in all directions. In contrast, in galaxies with so-called pseudobulges, or disc-type bulges, the movement of the spiral arms is preserved right to the centre of the galaxy.
Although the spiral structure is relatively subtle in this image, scientists have shown that NGC 4980 has a disc-type bulge, and its rotating spiral structure extends to the very centre of the galaxy.
Bright spiral arms are the sites of new star formation in spiral galaxies, and NGC 4980 is no exception. The galaxy’s arms are traced out by blue pockets of extremely hot newborn stars, visible across much of its disc. This sets it apart from the reddish galaxies visible in the background, which are distant elliptical galaxies made up of much older, and hence redder, stars.
This image is composed of exposures taken in visible and infrared light by Hubble’s Advanced Camera for Surveys. The image is approximately 3.3 by 1.5 arcminutes in size.
The NASA/ESA Hubble Space Telescope has spotted a UFO — well, the UFO Galaxy, to be precise. NGC 2683 is a spiral galaxy seen almost edge-on, giving it the shape of a classic science fiction spaceship. This is why the astronomers at the Astronaut Memorial Planetarium and Observatory gave it this attention-grabbing nickname.
While a bird’s eye view lets us see the detailed structure of a galaxy (such as this Hubble image of a barred spiral), a side-on view has its own perks. In particular, it gives astronomers a great opportunity to see the delicate dusty lanes of the spiral arms silhouetted against the golden haze of the galaxy’s core. In addition, brilliant clusters of young blue stars shine scattered throughout the disc, mapping the galaxy’s star-forming regions.
Perhaps surprisingly, side-on views of galaxies like this one do not prevent astronomers from deducing their structures. Studies of the properties of the light coming from NGC 2683 suggest that this is a barred spiral galaxy, even though the angle we see it at does not let us see this directly.
NGC 2683, discovered on 5 February 1788 by the famous astronomer William Herschel, lies in the northern constellation of Lynx, a constellation named not because of its resemblance to the feline animal, but because it is fairly faint, requiring the “sensitive eyes of a cat” to discern it. And when you manage to get a look at it, you’ll find treasures like this, making it well worth the effort.
This image is produced from two adjacent fields observed in visible and infrared light by Hubble’s Advanced Camera for Surveys. A narrow strip which appears slightly blurred and crosses most of the image horizontally is a result of a gap between Hubble’s detectors. This strip has been patched using images from observations of the galaxy made by ground-based telescopes, which show significantly less detail.
The field of view is approximately 6.5 by 3.3 arcminutes.
Astronomers using the NASA/ESA Hubble Space Telescope have made images of several galaxies containing quasars, which act as gravitational lenses to amplify and distort images of the galaxies aligned behind them.
Quasars are among the brightest objects in the Universe, far outshining the total output of their host galaxies. They are powered by supermassive black holes, which pull in surrounding material that then heats up as it falls towards the black hole. The path that the light from even more distant galaxies takes on its journey towards us is bent by the enormous masses at the centre of these galaxies. Gravitational lensing is a subtle effect which requires extremely high resolution observations, something for which Hubble is extremely well suited.
To find these rare cases of galaxy–quasar combinations acting as lenses, a team of astronomers led by Frederic Courbin at the Ecole Polytechnique Federale de Lausanne (EPFL, Switzerland) selected 23 000 quasar spectra in the Sloan Digital Sky Survey (SDSS). They looked for the spectral imprint of galaxies at much greater distances that happened to align with foreground galaxies. Once candidates were identified, Hubble’s sharp vision was used to look for the characteristic gravitational arcs and rings that would be produced by gravitational lensing.
In Hubble’s images, the quasars are the bright spots visible at the centre of the galaxies, while the lensed images of distant galaxies are visible as fainter arc-shaped forms that surround them. From left to right, the galaxies are: SDSS J0919+2720, with two bluish lensed images clearly visible above and below the galaxy’s centre; SDSS J1005+4016, with one yellowish arc visible to the right of the galaxy’s centre; and SDSS J0827+5224, with two lensed images very faintly visible, one above and to the right, and one below and to the left of the galaxy’s centre.
Quasar host galaxies are hard or sometimes even impossible to see because the central quasar far outshines the galaxy. Therefore, it is difficult to estimate the mass of a host galaxy based on the collective brightness of its stars. However, gravitational lensing candidates are invaluable for estimating the mass of a quasar’s host galaxy because the amount of distortion in the lens can be used to estimate a galaxy’s mass.
The breathtaking butterfly-like planetary nebula NGC 6881 is visible here in an image taken by the NASA/ESA Hubble Space Telescope. Located in the constellation of Cygnus, it is formed of an inner nebula, estimated to be about one fifth of a light-year across, and symmetrical “wings” that spread out about one light-year from one tip to the other. The symmetry could be due to a binary star at the nebula’s centre.
NGC 6881 has a dying star at its core which is about 60% of the mass of the Sun. It is an example of a quadrupolar planetary nebula, made from two pairs of bipolar lobes pointing in different directions, and consisting of four pairs of flat rings. There are also three rings in the centre.
A planetary nebula is a cloud of ionised gas, emitting light of various colours. It typically forms when a dying star — a red giant — throws off its outer layers, because of pulsations and strong stellar winds.
The star’s exposed hot, luminous core starts emitting ultraviolet radiation, exciting the outer layers of the star, which then become a newly born planetary nebula. At some point, the nebula is bound to dissolve in space, leaving the central star as a white dwarf — the final evolutionary state of stars.
The name “planetary” dates back to the 18th century, when such nebulae were first discovered — and when viewed through small optical telescopes, they looked a lot like giant planets.
Planetary nebulae usually live for a few tens of thousands of years, a short phase in the lifetime of a star.
The image was taken through three filters which isolate the specific wavelength of light emitted by nitrogen (shown in red), hydrogen (shown in green) and oxygen (shown in blue).
The NASA/ESA Hubble Space Telescope has produced this beautiful image of the galaxy NGC 1483. NGC 1483 is a barred spiral galaxy located in the southern constellation of Dorado — the dolphinfish in Spanish. The nebulous galaxy features a bright central bulge and diffuse arms with distinct star-forming regions. In the background, many other distant galaxies can be seen.
The constellation Dorado is home to the Dorado Group of galaxies, a loose group comprising an estimated 70 galaxies and located some 62 million light-years away. The Dorado group is much larger than the Local Group that includes the Milky Way (and which contains around 30 galaxies) and approaches the size of a galaxy cluster. Galaxy clusters are the largest groupings of galaxies (and indeed the largest structures of any type) in the Universe to be held together by their gravity.
Barred spiral galaxies are so named because of the prominent bar-shaped structures found in their centre. They form about two thirds of all spiral galaxies, including the Milky Way. Recent studies suggest that bars may be a common stage in the formation of spiral galaxies, and may indicate that a galaxy has reached full maturity.
The myriad faint stars that comprise the Antlia Dwarf galaxy are more than four million light-years from Earth, but this NASA/ESA Hubble Space Telescope image offers such clarity that they could be mistaken for much closer stars in our own Milky Way. This very faint and sparsely populated small galaxy was only discovered in 1997.
Although small, the Antlia Dwarf is a dynamic site featuring stars at many different stages of evolution, from young to old. The freshest stars are only found in the central regions where there is significant ongoing star formation. Older stars and globular clusters are found in the outer areas.
It is not entirely clear whether the Antlia Dwarf is a member of our galactic neighbourhood, called the Local Group. It probably lies just beyond the normally accepted outer limits of the group. Although it is fairly isolated, some believe it has interacted with other star groups. Evidence comes from galaxy NGC 3109, close to the Antlia Dwarf (but not visible in this image). Both galaxies feature rifts of stars moving at comparable velocities; a telltale sign that they were gravitationally linked at some point in the past.
This picture was created from observations in visible and infrared light taken with the Wide Field Channel of Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.2 by 1.5 arcminutes.
At the turn of the 19th century, the binary star system Eta Carinae was faint and undistinguished. In the first decades of the century, it became brighter and brighter, until, by April 1843, it was the second brightest star in the sky, outshone only by Sirius (which is almost a thousand times closer to Earth). In the years that followed, it gradually dimmed again and by the 20th century was totally invisible to the naked eye.
The star has continued to vary in brightness ever since, and while it is once again visible to the naked eye on a dark night, it has never again come close to its peak of 1843.
The larger of the two stars in the Eta Carinae system is a huge and unstable star that is nearing the end of its life, and the event that the 19th century astronomers observed was a stellar near-death experience. Scientists call these outbursts supernova impostor events, because they appear similar to supernovae but stop just short of destroying their star.
Although 19th century astronomers did not have telescopes powerful enough to see the 1843 outburst in detail, its effects can be studied today. The huge clouds of matter thrown out a century and a half ago, known as the Homunculus Nebula, have been a regular target for Hubble since its launch in 1990. This image, taken with the Advanced Camera for Surveys High Resolution Channel, is the most detailed yet, and shows how the material from the star was not thrown out in a uniform manner, but forms a huge dumbbell shape.
Eta Carinae is not only interesting because of its past, but also because of its future. It is one of the closest stars to Earth that is likely to explode in a supernova in the relatively near future (though in astronomical timescales the “near future” could still be a million years away). When it does, expect an impressive view from Earth, far brighter still than its last outburst: SN 2006gy, the brightest supernova ever observed, came from a star of the same type.
This image consists of ultraviolet and visible light images from the High Resolution Channel of Hubble’s Advanced Camera for Surveys. The field of view is approximately 30 arcseconds across.
It’s well known that the Universe is changeable: even the stars that appear static and predictable every night are subject to change.
This image from the NASA/ESA Hubble Space Telescope shows the planetary nebula Hen 3-1333. Planetary nebulae have nothing to do with planets — they actually represent the death throes of mid-sized stars like the Sun. As they puff out their outer layers, large, irregular globes of glowing gas expand around them, which appeared planet-like through the small telescopes that were used by their first discoverers.
The star at the heart of Hen 3-1333 is thought to have a mass of around 60% that of the Sun, but unlike the Sun, its apparent brightness varies substantially over time. Astronomers believe this variability is caused by a disc of dust which lies almost edge-on when viewed from Earth, which periodically obscures the star.
It is a Wolf–Rayet type star — a late stage in the evolution of Sun-sized stars. These are named after (and share many observational characteristics with) Wolf–Rayet stars, which are much larger. Why the similarity? Both Wolf–Rayet and Wolf–Rayet type stars are hot and bright because their helium cores are exposed: the former because of the strong stellar winds characteristic of these stars; the latter because the outer layers of the stars have been puffed away as the star runs low on fuel.
The exposed helium core, rich with heavier elements, means that the surfaces of these stars are far hotter than the Sun, typically 25 000 to 50 000 degrees Celsius (the Sun has a comparatively chilly surface temperature of just 5500 degrees Celsius).
So while they are dramatically smaller in size, the Wolf–Rayet type stars such as the one at the core of Hen 3-1333 effectively mimic the appearance of their much bigger and more energetic namesakes: they are sheep in Wolf–Rayet clothing.
This visible-light image was taken by the high resolution channel of Hubble’s Advanced Camera for Surveys. The field of view is approximately 26 by 26 arcseconds.
Many of the Universe’s galaxies are like our own, displaying beautiful spiral arms wrapping around a bright nucleus. Examples in this stunning image, taken with the Wide Field Camera 3 on the NASA/ESA Hubble Space Telescope, include the tilted galaxy at the bottom of the frame, shining behind a Milky Way star, and the small spiral at the top centre.
Other galaxies are even odder in shape. Markarian 779, the galaxy at the top of this image, has a distorted appearance because it is likely the product of a recent galactic merger between two spirals. This collision destroyed the spiral arms of the galaxies and scattered much of their gas and dust, transforming them into a single peculiar galaxy with a unique shape.
This galaxy is part of the Markarian catalogue, a database of over 1500 galaxies named after B. E. Markarian, the Armenian astronomer who studied them in the 1960s. He surveyed the sky for bright objects with unusually strong emission in the ultraviolet.
Ultraviolet radiation can come from a range of sources, so the Markarian catalogue is quite diverse. An excess of ultraviolet emissions can be the result of the nucleus of an “active” galaxy, powered by a supermassive black hole at its centre. It can also be due to events of intense star formation, called starbursts, possibly triggered by galactic collisions. Markarian galaxies are, therefore, often the subject of studies aimed at understanding active galaxies, starburst activity, and galaxy interactions and mergers.
Looking like a hoard of gems fit for an emperor’s collection, this deep sky object called NGC 6752 is in fact far more worthy of admiration. It is a globular cluster, and at over 10 billion years old is one of the most ancient collections of stars known. It has been blazing for well over twice as long as our Solar System has existed.
NGC 6752 contains a high number of “blue straggler” stars, some of which are visible in this image. These stars display characteristics of stars younger than their neighbours, despite models suggesting that most of the stars within globular clusters should have formed at approximately the same time. Their origin is therefore something of a mystery.
Studies of NGC 6752 may shed light on this situation. It appears that a very high number — up to 38% — of the stars within its core region are binary systems. Collisions between stars in this turbulent area could produce the blue stragglers that are so prevalent.
Lying 13 000 light-years distant, NGC 6752 is far beyond our reach, yet the clarity of Hubble’s images brings it tantalisingly close.
This NASA/ESA Hubble Space Telescope picture may trick you into thinking that the galaxy in it — known as UZC J224030.2+032131 — has not one but five different nuclei. In fact, the core of the galaxy is only the faint and diffuse object seen at the centre of the cross-like structure formed by the other four dots, which are images of a distant quasar located in the background of the galaxy.
The picture shows a famous cosmic mirage known as the Einstein Cross, and is a direct visual confirmation of the theory of general relativity. It is one of the best examples of the phenomenon of gravitational lensing — the bending of light by gravity as predicted by Einstein in the early 20th century. In this case, the galaxy’s powerful gravity acts as a lens that bends and amplifies the light from the quasar behind it, producing four images of the distant object.
The quasar is seen as it was around 11 billion years ago, in the direction of the constellation of Pegasus, while the galaxy that works as a lens is some ten times closer. The alignment between the two objects is remarkable (within 0.05 arcseconds), which is in part why such a special type of gravitational lensing is observed.
This image is likely the sharpest image of the Einstein Cross ever made, and was produced by Hubble’s Wide Field and Planetary Camera 2, and has a field of view of 26 by 26 arcseconds.
In mathematics, relations and functions play a very important role. A relation can be represented in roster form and in tabular form. For example, a relation ‘R’ on the set A = {1, 2, 3, 4, 5} defined by R = {(a, b) : b = a + 2} can also be expressed as:
a R b if and only if b = a + 2.
Let ‘A’ and ‘B’ be two sets. Then a relation ‘R’ from set ‘A’ to set ‘B’ is a subset of A x B. Thus ‘R’ is a relation from A to B ↔ R is a subset of A x B. If ‘R’ is a relation from a non-void set ‘A’ to a non-void set ‘B’ and if (a, b) ∈ R, then we write a R b, which is read as “a is related to b by the relation R”. If (a, b) ∉ R, then we write a !R b, and we say that ‘a’ is not related to ‘b’ by the relation ‘R’.
Domain: Let ‘R’ be a relation from a set ‘A’ to a set ‘B’. Then the set of all first components or coordinates of the ordered pairs belonging to ‘R’ is called the domain of ‘R’.
Thus, domain of R = {a : (a, b) ∈ R}.
Range: Let ‘R’ be a relation from a set ‘A’ to a set ‘B’. Then the set of all second components or coordinates of the ordered pairs belonging to ‘R’ is called the range of ‘R’.
Thus, range of R = {b : (a, b) ∈ R}.
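As a small illustration of these definitions, the sketch below (in Python, using the set A and the rule b = a + 2 from the example above) builds the relation and reads off its domain and range:

```python
# Build the relation R = {(a, b) : b = a + 2} on the set A = {1, 2, 3, 4, 5}.
A = {1, 2, 3, 4, 5}
R = {(a, a + 2) for a in A if (a + 2) in A}

domain_R = {a for (a, b) in R}   # set of all first components
range_R = {b for (a, b) in R}    # set of all second components

print(sorted(R))         # [(1, 3), (2, 4), (3, 5)]
print(sorted(domain_R))  # [1, 2, 3]
print(sorted(range_R))   # [3, 4, 5]
```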
Types of relation are:
- Void, universal and identity relations.
- Antisymmetric relations.
Function: If f : A → B is a function, then ‘f’ associates every element of set ‘A’ with an element of set ‘B’ in such a way that each element of set ‘A’ is associated with a unique element of set ‘B’. Let ‘A’ and ‘B’ be two non-empty sets; then a function ‘f’ from set ‘A’ to set ‘B’ is a rule which assigns elements of set ‘A’ to elements of set ‘B’ such that:
(i) every element of set ‘A’ is assigned to an element of set ‘B’;
(ii) every element of set ‘A’ is assigned to a unique element of set ‘B’.
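The two conditions above can be checked mechanically. The sketch below is illustrative only (the helper name is_function is our own) and tests whether a given set of ordered pairs is a function from A to B:

```python
def is_function(relation, A, B):
    """Return True if 'relation' (a set of (a, b) pairs) is a function from A to B."""
    if not all(a in A and b in B for (a, b) in relation):
        return False                      # every pair must come from A x B
    images = {}
    for a, b in relation:
        if a in images and images[a] != b:
            return False                  # condition (ii): the image must be unique
        images[a] = b
    return set(images) == set(A)          # condition (i): every element of A has an image

A, B = {1, 2, 3}, {4, 5, 6}
print(is_function({(1, 4), (2, 5), (3, 6)}, A, B))          # True
print(is_function({(1, 4), (2, 5)}, A, B))                  # False: 3 has no image
print(is_function({(1, 4), (1, 5), (2, 5), (3, 6)}, A, B))  # False: 1 has two images
```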
Some standard real Functions that are widely used when we study Calculus are:
-Greatest Integer Function.
-Smallest Integer Function.
-Fractional Part function.
-Square Root Function.
-Cube Root Function.
Operations such as addition, product, difference, quotient, multiplication of a function by a scalar, and taking the reciprocal of a function can be performed on the functions above.
Types of function are listed below; a small sketch of how these properties can be checked follows the list.
- One-one function, also called an injection.
- Many-one function.
- Onto function, also called a surjection.
- One-one onto function, also called a bijection.
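A rough way to check these properties for a function given as a table of values (an illustrative sketch; the helper name classify is our own):

```python
def classify(f, A, B):
    """Classify a function f (given as a dict a -> b) from A to B."""
    assert set(f) == set(A), "f must be defined on every element of A"
    images = list(f.values())
    one_one = len(set(images)) == len(images)   # injection: distinct inputs have distinct images
    onto = set(images) == set(B)                # surjection: every element of B is an image
    return {
        "one-one (injection)": one_one,
        "many-one": not one_one,
        "onto (surjection)": onto,
        "bijection": one_one and onto,
    }

A, B = {1, 2, 3}, {1, 4, 9}
print(classify({1: 1, 2: 4, 3: 9}, A, B))  # one-one and onto, hence a bijection
print(classify({1: 1, 2: 1, 3: 9}, A, B))  # many-one and not onto
```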
This was all about relations and functions.
Algebra functions are one of the most important concepts of mathematics. A function can be considered as a rule which produces new elements out of some given elements. Several other terms, such as 'map', are also used to denote a function.
A relation ‘f’ from a Set ‘A’ to a set ‘B’ is said to be a function, if every element of set ‘A’ has one and only one image in set ‘B’.
In other words A...Read More
Relation algebra is the algebra of sets that deals with a set of finite relations that is closed under some operators. These operators work upon one or more relations.
In mathematics a relation is a set of ordered pairs, and the symbol for the set is denoted by “”. There are some examples of relations:
{(0, 2), (55, 21), (3, 50)},
{(0, 2), (2, 5), (3, 9)},
A relation is a set o...Read More
A linear function can be represented by a linear equation, and its graph can be drawn using the y-intercept and the slope as per the following steps:
First, locate the y-intercept on the graph and plot that point.
From that point, use the slope to find a second point and then plot that point as well.
Now the last step is to draw the ...Read More
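As a concrete illustration of these steps, the sketch below assumes the line is given in slope–intercept form y = m·x + b, with made-up values for m and b:

```python
# Plot y = m*x + b from its y-intercept and slope ("rise over run").
m, b = 2, 1          # assumed slope and y-intercept: y = 2x + 1

p1 = (0, b)          # step 1: the y-intercept is the point (0, b)
p2 = (0 + 1, b + m)  # step 2: from p1, move 1 unit right and m units up

print(p1, p2)        # (0, 1) (1, 3): drawing the line through these two points gives the graph
```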
According to the relational model of databases, a table is a convenient representation of a relation, but the two are not necessarily equivalent: for example, a SQL table can have duplicate rows, whereas a relation does not allow duplication. Representation as a table also implies a particular ordering of the rows and columns, whereas a relation is ex...Read More
There are many algebra patterns defined in mathematics, built from operations such as addition, subtraction, multiplication and division.
Addition operation pattern: if an addition pattern is defined such as 2 * a + 6, its result depends upon the value of a, which means that when the value of a changes, the result also changes –
For a = 1, the result is 2 * a + 6...Read More
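A short worked illustration of the pattern above (the values of a are arbitrary):

```python
# Evaluate the addition pattern 2 * a + 6 for several values of a.
for a in [1, 2, 3, 4]:
    print(a, 2 * a + 6)   # 1 -> 8, 2 -> 10, 3 -> 12, 4 -> 14
```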
A coordinate plane is the plane of two dimensions in which we represent points in terms of the x-axis and the y-axis. In the coordinate plane we represent the numbers ‘a’ and ‘b’ in a specific order, where ‘a’ is in the first place and ‘b’ is in the second place of the ordered pair (a, b). In the coordinate plane we represent each point in the pla...Read More
75 | An analog-to-digital converter (abbreviated ADC, A/D or A to D) is a device that converts a continuous physical quantity (usually voltage) to a digital number that represents the quantity's amplitude. The conversion involves quantization of the input, so it necessarily introduces a small amount of error. The inverse operation is performed by a digital-to-analog converter (DAC). Instead of doing a single conversion, an ADC often performs the conversions ("samples" the input) periodically. The result is a sequence of digital values that have converted a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal.
An ADC may also provide an isolated measurement such as an electronic device that converts an input analog voltage or current to a digital number proportional to the magnitude of the voltage or current. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs.
The digital output may use different coding schemes. Typically the digital output will be a two's complement binary number that is proportional to the input, but there are other possibilities. An encoder, for example, might output a Gray code.
The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The values are usually stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or "levels", is a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one in 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 (i.e. unsigned integer) or from −128 to 127 (i.e. signed integer), depending on the application.
Resolution can also be defined electrically, and expressed in volts. The minimum change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of discrete values:
Q = EFSR / (2^M - 1)
where M is the ADC's resolution in bits and EFSR is the full scale voltage range (also called 'span'). EFSR is given by
EFSR = VRefHi - VRefLow
where VRefHi and VRefLow are the upper and lower extremes, respectively, of the voltages that can be coded.
Normally, the number of voltage intervals is given by
N = 2^M - 1
where M is the ADC's resolution in bits.
That is, one voltage interval is assigned in between two consecutive code levels.
- Coding scheme as in figure 1 (assume input signal x(t) = Acos(t), A = 5V)
- Full scale measurement range = -5 to 5 volts
- ADC resolution is 8 bits: 2^8 - 1 = 256 - 1 = 255 quantization levels (codes)
- ADC voltage resolution, Q = (10 V − 0 V) / 255 = 10 V / 255 ≈ 0.039 V ≈ 39 mV.
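The arithmetic in this example can be checked with a few lines of Python (only the 8-bit resolution, the -5 to 5 V range and the 255-interval convention above are used; the variable names are ours):

```python
M = 8                      # ADC resolution in bits
v_hi, v_lo = 5.0, -5.0     # full scale range from the example
E_FSR = v_hi - v_lo        # 10 V span
N = 2**M - 1               # 255 voltage intervals, as in the example
Q = E_FSR / N              # LSB / voltage resolution
print(N, round(Q * 1000, 1), "mV")   # 255, ~39.2 mV
```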
In practice, the useful resolution of a converter is limited by the best signal-to-noise ratio (SNR) that can be achieved for a digitized signal. An ADC can resolve a signal to only a certain number of bits of resolution, called the effective number of bits (ENOB). One effective bit of resolution changes the signal-to-noise ratio of the digitized signal by 6 dB, if the resolution is limited by the ADC. If a preamplifier has been used prior to A/D conversion, the noise introduced by the amplifier can be an important contributing factor towards the overall SNR.
Response type
Most ADCs are linear types. The term linear implies that the range of input values has a linear relationship with the output value.
Some early converters had a logarithmic response to directly implement A-law or μ-law coding. These encodings are now achieved by using a higher-resolution linear ADC (e.g. 12 or 16 bits) and mapping its output to the 8-bit coded values.
An ADC has several sources of errors. Quantization error and (assuming the ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital conversion. There is also a so-called aperture error which is due to a clock jitter and is revealed when digitizing a time-variant signal (not a constant value).
These errors are measured in a unit called the least significant bit (LSB). In the above example of an eight-bit ADC, an error of one LSB is 1/256 of the full signal range, or about 0.4%.
Quantization error
Quantization error (or quantization noise) is the difference between the original signal and the digitized signal. Hence, the magnitude of the quantization error at the sampling instant is between zero and half of one LSB. Quantization error is due to the finite resolution of the digital representation of the signal, and is an unavoidable imperfection in all types of ADCs.
All ADCs suffer from non-linearity errors caused by their physical imperfections, causing their output to deviate from a linear function (or some other function, in the case of a deliberately non-linear ADC) of their input. These errors can sometimes be mitigated by calibration, or prevented by testing.
Important parameters for linearity are integral non-linearity (INL) and differential non-linearity (DNL). These non-linearities reduce the dynamic range of the signals that can be digitized by the ADC, also reducing the effective resolution of the ADC.
Aperture error
Imagine digitizing a sine wave x(t) = A·sin(2π·f0·t). Provided that the actual sampling time uncertainty due to the clock jitter is Δt, the error caused by this phenomenon can be estimated as E_ap = |dx/dt|·Δt ≤ 2π·f0·A·Δt.
The error is zero for DC, small at low frequencies, but significant when high frequencies have high amplitudes. This effect can be ignored if it is drowned out by the quantizing error. Jitter requirements can be calculated using the following formula: Δt < 1 / (2^q · π · f0), where q is the number of ADC bits.
|Resolution (bits)||1 Hz||44.1 kHz||192 kHz||1 MHz||10 MHz||100 MHz||1 GHz|
|8||1,243 µs||28.2 ns||6.48 ns||1.24 ns||124 ps||12.4 ps||1.24 ps|
|10||311 µs||7.05 ns||1.62 ns||311 ps||31.1 ps||3.11 ps||0.31 ps|
|12||77.7 µs||1.76 ns||405 ps||77.7 ps||7.77 ps||0.78 ps||0.08 ps|
|14||19.4 µs||441 ps||101 ps||19.4 ps||1.94 ps||0.19 ps||0.02 ps|
|16||4.86 µs||110 ps||25.3 ps||4.86 ps||0.49 ps||0.05 ps||–|
|18||1.21 µs||27.5 ps||6.32 ps||1.21 ps||0.12 ps||–||–|
|20||304 ns||6.88 ps||1.58 ps||0.16 ps||–||–||–|
|24||19.0 ns||0.43 ps||0.10 ps||–||–||–||–|
This table shows, for example, that it is not worth using a precise 24-bit ADC for sound recording without an ultra-low-jitter clock. This phenomenon should be taken into account before choosing an ADC.
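Assuming the jitter bound Δt < 1/(2^q · π · f0) quoted above, a short script can reproduce a few of the table entries (a sketch, not part of the original article):

```python
import math

def max_jitter(bits, f0):
    # Clock jitter must stay below 1 / (2^q * pi * f0)
    return 1.0 / (2**bits * math.pi * f0)

for bits, f0 in [(16, 44.1e3), (24, 1.0), (8, 1e6)]:
    print(bits, f0, f"{max_jitter(bits, f0):.3e} s")
# 16 bits @ 44.1 kHz -> ~1.10e-10 s (110 ps), matching the table;
# 24 bits @ 1 Hz -> ~19.0 ns; 8 bits @ 1 MHz -> ~1.24 ns
```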
When sampling audio signals at 44.1 kHz, the anti-aliasing filter should have eliminated all frequencies above 22 kHz. The input frequency (in this case, 22 kHz), not the ADC clock frequency, is the determining factor with respect to jitter performance.
Sampling rate
The analog signal is continuous in time and it is necessary to convert this to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. The rate of new values is called the sampling rate or sampling frequency of the converter.
A continuously varying bandlimited signal can be sampled (that is, the signal values at intervals of time T, the sampling time, are measured and stored) and then the original signal can be exactly reproduced from the discrete-time values by an interpolation formula. The accuracy is limited by quantization error. However, this faithful reproduction is only possible if the sampling rate is higher than twice the highest frequency of the signal. This is essentially what is embodied in the Shannon-Nyquist sampling theorem.
Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time). An input circuit called a sample and hold performs this task—in most cases by using a capacitor to store the analog voltage at the input, and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally.
All ADCs work by sampling their input at discrete intervals of time. Their output is therefore an incomplete picture of the behaviour of the input. There is no way of knowing, by looking at the output, what the input was doing between one sampling instant and the next. If the input is known to be changing slowly compared to the sampling rate, then it can be assumed that the value of the signal between two sample instants was somewhere between the two sampled values. If, however, the input signal is changing rapidly compared to the sample rate, then this assumption is not valid.
If the digital values produced by the ADC are, at some later stage in the system, converted back to analog values by a digital to analog converter or DAC, it is desirable that the output of the DAC be a faithful representation of the original signal. If the input signal is changing much faster than the sample rate, then this will not be the case, and spurious signals called aliases will be produced at the output of the DAC. The frequency of the aliased signal is the difference between the signal frequency and the sampling rate. For example, a 2 kHz sine wave being sampled at 1.5 kHz would be reconstructed as a 500 Hz sine wave. This problem is called aliasing.
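The 2 kHz / 1.5 kHz example can be checked with a small helper that folds a frequency back into the first Nyquist zone (the folding rule is the standard one; the function below is only illustrative):

```python
def alias_frequency(f_signal, f_sample):
    # Fold the signal frequency back into the band 0 .. f_sample/2
    f = f_signal % f_sample
    return f_sample - f if f > f_sample / 2 else f

print(alias_frequency(2000, 1500))   # 500 (Hz), as stated above
```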
To avoid aliasing, the input to an ADC must be low-pass filtered to remove frequencies above half the sampling rate. This filter is called an anti-aliasing filter, and is essential for a practical ADC system that is applied to analog signals with higher frequency content.
Although aliasing in most systems is unwanted, it should also be noted that it can be exploited to provide simultaneous down-mixing of a band-limited high frequency signal (see undersampling and frequency mixer). The alias is effectively the lower heterodyne of the signal frequency and sampling frequency.
In A-to-D converters, performance can usually be improved using dither. This is a very small amount of random noise (white noise), which is added to the input before conversion. Its effect is to cause the state of the LSB to randomly oscillate between 0 and 1 in the presence of very low levels of input, rather than sticking at a fixed value. Rather than the signal simply getting cut off altogether at this low level (which is only being quantized to a resolution of 1 bit), it extends the effective range of signals that the A-to-D converter can convert, at the expense of a slight increase in noise - effectively the quantization error is diffused across a series of noise values which is far less objectionable than a hard cutoff. The result is an accurate representation of the signal over time. A suitable filter at the output of the system can thus recover this small signal variation.
An audio signal of very low level (with respect to the bit depth of the ADC) sampled without dither sounds extremely distorted and unpleasant. Without dither the low level may cause the least significant bit to "stick" at 0 or 1. With dithering, the true level of the audio may be calculated by averaging the actual quantized sample with a series of other samples [the dither] that are recorded over time.
A virtually identical process, also called dither or dithering, is often used when quantizing photographic images to a fewer number of bits per pixel—the image becomes noisier but to the eye looks far more realistic than the quantized image, which otherwise becomes banded. This analogous process may help to visualize the effect of dither on an analogue audio signal that is converted to digital.
Dithering is also used in integrating systems such as electricity meters. Since the values are added together, the dithering produces results that are more exact than the LSB of the analog-to-digital converter.
Note that dither can only increase the resolution of a sampler, it cannot improve the linearity, and thus accuracy does not necessarily improve.
Usually, signals are sampled at the minimum rate required, for economy, with the result that the quantization noise introduced is white noise spread over the whole pass band of the converter. If a signal is sampled at a rate much higher than the Nyquist frequency and then digitally filtered to limit it to the signal bandwidth there are the following advantages:
- digital filters can have better properties (sharper rolloff, phase) than analogue filters, so a sharper anti-aliasing filter can be realised and then the signal can be downsampled giving a better result
- a 20-bit ADC can be made to act as a 24-bit ADC with 256× oversampling
- the signal-to-noise ratio due to quantization noise will be higher than if the whole available band had been used. With this technique, it is possible to obtain an effective resolution larger than that provided by the converter alone
- The improvement in SNR is 3 dB (equivalent to 0.5 bits) per octave of oversampling which is not sufficient for many applications. Therefore, oversampling is usually coupled with noise shaping (see sigma-delta modulators). With noise shaping, the improvement is 6L+3 dB per octave where L is the order of loop filter used for noise shaping. e.g. - a 2nd order loop filter will provide an improvement of 15 dB/octave.
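Taking the figures just quoted at face value (3 dB per octave without noise shaping, 6L + 3 dB per octave with an L-th order loop filter, and roughly 6 dB per effective bit), the gain from oversampling can be tallied as follows (illustrative sketch):

```python
import math

def snr_gain_db(oversampling_ratio, loop_order=0):
    # Improvement per octave: 3 dB plain, 6L + 3 dB with L-th order noise shaping
    octaves = math.log2(oversampling_ratio)
    return (6 * loop_order + 3) * octaves

def extra_bits(gain_db):
    return gain_db / 6.0              # roughly 6 dB per effective bit

g = snr_gain_db(256)                  # plain 256x oversampling
print(g, extra_bits(g))               # 24 dB -> ~4 extra bits (20-bit ADC acting like 24-bit)
print(snr_gain_db(4, loop_order=2))   # 2nd order modulator, 2 octaves: 30 dB
```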
Relative speed and precision
The speed of an ADC varies by type. The Wilkinson ADC is limited by the clock rate which is processable by current digital circuits. Currently, frequencies up to 300 MHz are possible. For a successive-approximation ADC, the conversion time scales with the logarithm of the resolution, e.g. the number of bits. Thus for high resolution, it is possible that the successive-approximation ADC is faster than the Wilkinson. However, the time consuming steps in the Wilkinson are digital, while those in the successive-approximation are analog. Since analog is inherently slower than digital, as the resolution increases, the time required also increases. Thus there are competing processes at work. Flash ADCs are certainly the fastest type of the three. The conversion is basically performed in a single parallel step. For an 8-bit unit, conversion takes place in a few tens of nanoseconds.
There is, as expected, somewhat of a tradeoff between speed and precision. Flash ADCs have drifts and uncertainties associated with the comparator levels. This results in poor linearity. For successive-approximation ADCs, poor linearity is also present, but less so than for flash ADCs. Here, non-linearity arises from accumulating errors from the subtraction processes. Wilkinson ADCs have the highest linearity of the three. These have the best differential non-linearity. The other types require channel smoothing to achieve the level of the Wilkinson.
The sliding scale principle
The sliding scale or randomizing method can be employed to greatly improve the linearity of any type of ADC, but especially flash and successive approximation types. For any ADC the mapping from input voltage to digital output value is not exactly a floor or ceiling function as it should be. Under normal conditions, a pulse of a particular amplitude is always converted to a digital value. The problem lies in that the ranges of analog values for the digitized values are not all of the same width, and the differential linearity decreases proportionally with the divergence from the average width. The sliding scale principle uses an averaging effect to overcome this phenomenon. A random, but known analog voltage is added to the sampled input voltage. It is then converted to digital form, and the equivalent digital amount is subtracted, thus restoring it to its original value. The advantage is that the conversion has taken place at a random point. The statistical distribution of the final levels is decided by a weighted average over a region of the range of the ADC. This in turn desensitizes it to the width of any specific level.
ADC types
These are the most common ways of implementing an electronic ADC:
- A direct-conversion ADC or flash ADC has a bank of comparators sampling the input signal in parallel, each firing for their decoded voltage range. The comparator bank feeds a logic circuit that generates a code for each voltage range. Direct conversion is very fast, capable of gigahertz sampling rates, but usually has only 8 bits of resolution or fewer, since the number of comparators needed, 2^N - 1, doubles with each additional bit, requiring a large, expensive circuit. ADCs of this type have a large die size, a high input capacitance, high power dissipation, and are prone to produce glitches at the output (by outputting an out-of-sequence code). Scaling to newer submicrometre technologies does not help as the device mismatch is the dominant design limitation. They are often used for video, wideband communications or other fast signals in optical storage.
- A successive-approximation ADC uses a comparator to successively narrow a range that contains the input voltage. At each successive step, the converter compares the input voltage to the output of an internal digital to analog converter which might represent the midpoint of a selected voltage range. At each step in this process, the approximation is stored in a successive approximation register (SAR). For example, consider an input voltage of 6.3 V and the initial range is 0 to 16 V. For the first step, the input 6.3 V is compared to 8 V (the midpoint of the 0–16 V range). The comparator reports that the input voltage is less than 8 V, so the SAR is updated to narrow the range to 0–8 V. For the second step, the input voltage is compared to 4 V (midpoint of 0–8). The comparator reports the input voltage is above 4 V, so the SAR is updated to reflect the input voltage is in the range 4–8 V. For the third step, the input voltage is compared with 6 V (halfway between 4 V and 8 V); the comparator reports the input voltage is greater than 6 volts, and search range becomes 6–8 V. The steps are continued until the desired resolution is reached. (A brief code sketch of this bisection procedure appears after this list.)
- A ramp-compare ADC produces a saw-tooth signal that ramps up or down then quickly returns to zero. When the ramp starts, a timer starts counting. When the ramp voltage matches the input, a comparator fires, and the timer's value is recorded. Timed ramp converters require the least number of transistors. The ramp time is sensitive to temperature because the circuit generating the ramp is often just some simple oscillator. There are two solutions: use a clocked counter driving a DAC and then use the comparator to preserve the counter's value, or calibrate the timed ramp. A special advantage of the ramp-compare system is that comparing a second signal just requires another comparator, and another register to store the voltage value. A very simple (non-linear) ramp-converter can be implemented with a microcontroller and one resistor and capacitor. Vice versa, a filled capacitor can be taken from an integrator, time-to-amplitude converter, phase detector, sample and hold circuit, or peak and hold circuit and discharged. This has the advantage that a slow comparator cannot be disturbed by fast input changes.
- The Wilkinson ADC was designed by D. H. Wilkinson in 1950. The Wilkinson ADC is based on the comparison of an input voltage with that produced by a charging capacitor. The capacitor is allowed to charge until its voltage is equal to the amplitude of the input pulse (a comparator determines when this condition has been reached). Then, the capacitor is allowed to discharge linearly, which produces a ramp voltage. At the point when the capacitor begins to discharge, a gate pulse is initiated. The gate pulse remains on until the capacitor is completely discharged. Thus the duration of the gate pulse is directly proportional to the amplitude of the input pulse. This gate pulse operates a linear gate which receives pulses from a high-frequency oscillator clock. While the gate is open, a discrete number of clock pulses pass through the linear gate and are counted by the address register. The time the linear gate is open is proportional to the amplitude of the input pulse, thus the number of clock pulses recorded in the address register is proportional also. Alternatively, the charging of the capacitor could be monitored, rather than the discharge.
- An integrating ADC (also dual-slope or multi-slope ADC) applies the unknown input voltage to the input of an integrator and allows the voltage to ramp for a fixed time period (the run-up period). Then a known reference voltage of opposite polarity is applied to the integrator and is allowed to ramp until the integrator output returns to zero (the run-down period). The input voltage is computed as a function of the reference voltage, the constant run-up time period, and the measured run-down time period. The run-down time measurement is usually made in units of the converter's clock, so longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. Converters of this type (or variations on the concept) are used in most digital voltmeters for their linearity and flexibility.
- A delta-encoded ADC or counter-ramp has an up-down counter that feeds a digital to analog converter (DAC). The input signal and the DAC both go to a comparator. The comparator controls the counter. The circuit uses negative feedback from the comparator to adjust the counter until the DAC's output is close enough to the input signal. The number is read from the counter. Delta converters have very wide ranges and high resolution, but the conversion time is dependent on the input signal level, though it will always have a guaranteed worst-case. Delta converters are often very good choices to read real-world signals. Most signals from physical systems do not change abruptly. Some converters combine the delta and successive approximation approaches; this works especially well when high frequencies are known to be small in magnitude.
- A pipeline ADC (also called subranging quantizer) uses two or more steps of subranging. First, a coarse conversion is done. In a second step, the difference to the input signal is determined with a digital to analog converter (DAC). This difference is then converted finer, and the results are combined in a last step. This can be considered a refinement of the successive-approximation ADC wherein the feedback reference signal consists of the interim conversion of a whole range of bits (for example, four bits) rather than just the next-most-significant bit. By combining the merits of the successive approximation and flash ADCs this type is fast, has a high resolution, and only requires a small die size.
- A sigma-delta ADC (also known as a delta-sigma ADC) oversamples the desired signal by a large factor and filters the desired signal band. Generally, a smaller number of bits than required are converted using a Flash ADC after the filter. The resulting signal, along with the error generated by the discrete levels of the Flash, is fed back and subtracted from the input to the filter. This negative feedback has the effect of noise shaping the error due to the Flash so that it does not appear in the desired signal frequencies. A digital filter (decimation filter) follows the ADC which reduces the sampling rate, filters off unwanted noise signal and increases the resolution of the output (sigma-delta modulation, also called delta-sigma modulation).
- A time-interleaved ADC uses M parallel ADCs where each ADC samples data every Mth cycle of the effective sample clock. The result is that the sample rate is increased M times compared to what each individual ADC can manage. In practice, the individual differences between the M ADCs degrade the overall performance reducing the SFDR. However, technologies exist to correct for these time-interleaving mismatch errors.
- An ADC with intermediate FM stage first uses a voltage-to-frequency converter to convert the desired signal into an oscillating signal with a frequency proportional to the voltage of the desired signal, and then uses a frequency counter to convert that frequency into a digital count proportional to the desired signal voltage. Longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. The two parts of the ADC may be widely separated, with the frequency signal passed through an opto-isolator or transmitted wirelessly. Some such ADCs use sine wave or square wave frequency modulation; others use pulse-frequency modulation. Such ADCs were once the most popular way to show a digital display of the status of a remote analog sensor.
There can be other ADCs that use a combination of electronics and other technologies:
- A time-stretch analog-to-digital converter (TS-ADC) digitizes a very wide bandwidth analog signal, that cannot be digitized by a conventional electronic ADC, by time-stretching the signal prior to digitization. It commonly uses a photonic preprocessor frontend to time-stretch the signal, which effectively slows the signal down in time and compresses its bandwidth. As a result, an electronic backend ADC, that would have been too slow to capture the original signal, can now capture this slowed down signal. For continuous capture of the signal, the frontend also divides the signal into multiple segments in addition to time-stretching. Each segment is individually digitized by a separate electronic ADC. Finally, a digital signal processor rearranges the samples and removes any distortions added by the frontend to yield the binary data that is the digital representation of the original analog signal.
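As a sketch of the successive-approximation procedure described in the list above, the following code walks the 6.3 V, 0-16 V example through a software bisection. It is purely illustrative; a real SAR ADC performs these comparisons with a comparator and an internal DAC rather than in software:

```python
def sar_convert(v_in, v_lo, v_hi, bits):
    # Successive approximation: halve the search range once per bit
    code = 0
    for _ in range(bits):
        mid = (v_lo + v_hi) / 2
        code <<= 1
        if v_in >= mid:        # comparator decision
            code |= 1
            v_lo = mid         # keep the upper half of the range
        else:
            v_hi = mid         # keep the lower half of the range
    return code

# 6.3 V in a 0-16 V range: first comparisons against 8 V, 4 V, 6 V ... as in the text
print(sar_convert(6.3, 0.0, 16.0, 4))   # 4-bit code 0110 -> 6
```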
Commercial analog-to-digital converters
Commercial ADCs are usually implemented as integrated circuits.
Most converters sample with 6 to 24 bits of resolution, and produce fewer than 1 megasample per second. Thermal noise generated by passive components such as resistors masks the measurement when higher resolution is desired. For audio applications and in room temperatures, such noise is usually a little less than 1 μV (microvolt) of white noise. If the MSB corresponds to a standard 2 V of output signal, this translates to a noise-limited performance that is less than 20~21 bits, and obviates the need for any dithering. As of February 2002, Mega- and giga-sample per second converters are available. Mega-sample converters are required in digital video cameras, video capture cards, and TV tuner cards to convert full-speed analog video to digital video files. Commercial converters usually have ±0.5 to ±1.5 LSB error in their output.
In many cases, the most expensive part of an integrated circuit is the pins, because they make the package larger, and each pin has to be connected to the integrated circuit's silicon. To save pins, it is common for slow ADCs to send their data one bit at a time over a serial interface to the computer, with the next bit coming out when a clock signal changes state, say from 0 to 5 V. This saves quite a few pins on the ADC package, and in many cases, does not make the overall design any more complex (even microprocessors which use memory-mapped I/O only need a few bits of a port to implement a serial bus to an ADC).
Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample and hold circuits, instrumentation amplifiers or differential inputs, where the quantity measured is the difference between two voltages.
Music recording
Analog-to-digital converters are integral to current music reproduction technology. Much music is produced on computers, and when an analog recording is used an analog-to-digital converter is needed to create the pulse-code modulation (PCM) data streams that go onto compact discs and digital music files.
The current crop of analog-to-digital converters utilized in music can sample at rates up to 192 kilohertz. High bandwidth headroom allows the use of cheaper or faster anti-aliasing filters of less severe filtering slopes. The proponents of oversampling assert that such shallower anti-aliasing filters produce less deleterious effects on sound quality, exactly because of their gentler slopes. Other experts prefer entirely filterless analog-to-digital conversion, finding aliasing less detrimental to sound perception than pre-conversion brickwall filtering. Considerable literature exists on these matters, but commercial considerations often play a significant role. Most high-profile recording studios record in 24-bit/192-176.4 kHz pulse-code modulation (PCM) or in Direct Stream Digital (DSD) formats, and then downsample or decimate the signal for Red-Book CD production (44.1 kHz) or to 48 kHz, which is commonly used for radio and television broadcast applications.
Digital signal processing
Analog-to-digital converters are needed to process, store, or transport virtually any analog signal in digital form. TV tuner cards, for example, use fast video analog-to-digital converters. Slow on-chip 8, 10, 12, or 16 bit analog-to-digital converters are common in microcontrollers. Digital storage oscilloscopes need very fast analog-to-digital converters, which are also crucial for software defined radio and other new applications.
Scientific instruments
Some radar systems commonly use analog-to-digital converters to convert signal strength to digital values for subsequent signal processing. Many other in situ and remote sensing systems commonly use analogous technology.
The number of binary bits in the resulting digitized numeric values reflects the resolution, the number of unique discrete levels of quantization (signal processing). The correspondence between the analog signal and the digital signal depends on the quantization error. The quantization process must occur at an adequate speed, a constraint that may limit the resolution of the digital signal.
Electrical Symbol
Testing an analog-to-digital converter requires an analog input source and hardware to send control signals and capture digital data output. Some ADCs also require an accurate source of reference signal.
The key parameters to test a SAR ADC are the following:
- DC Offset Error
- DC Gain Error
- Signal to Noise Ratio (SNR)
- Total Harmonic Distortion (THD)
- Integral Non Linearity (INL)
- Differential Non Linearity (DNL)
- Spurious Free Dynamic Range
- Power Dissipation
See also
- Audio converter
- Beta encoder
- Digital signal processing
- Differential linearity
- Integral linearity | http://en.wikipedia.org/wiki/Analog_to_digital_converter | 13
59 | For a function f of a single real variable x, the derivative is defined as the limit of the difference quotient
f'(x) = lim (h → 0) [f(x + h) - f(x)] / h,
provided this limit exists. In practice, the derivative is interpreted as the instantaneous rate of change of the function at x. Graphically, the derivative returns the slope of the tangent at x.
Cf. differentiation rules.
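A small numerical illustration of the difference quotient (the function x^2 and the step sizes are arbitrary choices, not part of the glossary entry):

```python
def difference_quotient(f, x, h):
    # (f(x + h) - f(x)) / h approaches f'(x) as h shrinks
    return (f(x + h) - f(x)) / h

f = lambda x: x**2            # f'(x) = 2x, so f'(3) = 6
for h in [0.1, 0.01, 0.001]:
    print(h, difference_quotient(f, 3.0, h))   # ≈ 6.1, 6.01, 6.001
```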
Given a set X, the derived set of X is the set of accumulation points of X. The second derived set is the derived set of the derived set, and so on.
Cf. Cantor-Bendixson Theorem.
French mathematician who is generally considered to have laid the foundations for modern mathematics. His greatest achievement was the invention of analytic geometry, in which the methods of algebra and those of geometry are used together. He is also a central figure in the history of modern philosophy; his treatises Meditations and Discourse on Method laid the groundwork both for modern rationalism and modern skepticism. Descartes did not actually use the rectangular coordinate system known as the Cartesian plane (this was developed by Leibniz and others), and he permitted only positive values for his variables. Nonetheless, his development of algebraic methods in geometry made possible an explosion of analytic discoveries by his successors, the most important of which was the discovery of the calculus less than a generation after Descartes’ death.
Those statistics used to describe a sample or population.
Cf. inferential statistics.
Geometry: A diameter of a circle (or sphere) is a line containing the center and with endpoints on the perimeter (resp. surface).
Analysis: Given a set X in a metric space, the diameter of X is the supremum of the distances between all pairs of points of X.
Graph Theory: The diameter of a given graph G is the maximum, over all pairs of vertices u, v of G that are in the same connected component of G, of the distance between u and v. In other words, it is the greatest distance between two vertices on the graph.
The difference of two numbers m and n, with n > m, is the number which when added to m yields n. For example, the difference of 3 and 5 is 2.
Set Theory: The difference of two sets A and B, denoted either as AB or as A - B, is the set of elements of A that are not in B.
A function is differentiable at a point of its domain if its derivative exists at that point. A function is said to be (simply) differentiable if its derivative exists at all points of its domain.
A rule permitting easy differentiation of functions having certain forms. See the article for a complete description.
A polynomial equation with integer coefficients. (Named after the 3rd century Greek mathematician Diophantus of Alexandria.)
Cf. Hilbert's Problems (the tenth problem), Fermat's Last Theorem.
A graph whose edges are directed, i.e. have distinguished ends. One end of every directed edge is called the head and the other is called the tail, and the edge is said to be from the tail to the head. In pictorial representations of graphs, directed edges are drawn to end with arrows, pointing to the head. The i, j entry in the adjacency matrix of a directed graph is the number of edges from vertex i to vertex j.
General: So-called “Discrete Mathematics” consists of those branches of mathematics which are concerned with the relations among fixed rather than continuously varying quantities, e.g., combinatorics and probability.
Topology: A topology on a set X is discrete if every subset of X is open, or equivalently if every one-point set of X is open.
Two sets are disjoint if they have empty intersection.
A union of sets which are disjoint.
A set of points consisting of a circle together with its interior points. The set consisting only of the interior points of a circle is called an open disk.
The distance between two points in a space is given by the length of the geodesic joining those two points. In Euclidean space, the geodesic is given by a straight line, and the distance between two points is the length of this line. The distance between two points a and b on a real number line is the absolute value of their difference, i.e., d(a, b) = |a - b|. In two (or more) dimensions, the distance is given by the (generalized) Pythagorean theorem, i.e., in a Cartesian coordinate system of n dimensions, where a = (a1, ... ,an) and b = (b1, ... ,bn), the distance d(a, b) is given by
d(a, b) = sqrt((a1 - b1)^2 + (a2 - b2)^2 + ... + (an - bn)^2).
The concept of distance may be generalized to more abstract spaces – such a distance concept is referred to as a metric.
Graph Theory: The length of the shortest path between two vertices of a graph. If there is no path between two vertices, their distance is defined to be infinite. The distance between two vertices v and u is denoted by d(v, u). In a connected graph, distance is a metric.
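The Euclidean distance formula above translates directly into code (an illustrative sketch; the sample points are arbitrary):

```python
import math

def euclidean_distance(a, b):
    # Generalized Pythagorean theorem in n dimensions
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

print(euclidean_distance((0, 0), (3, 4)))        # 5.0
print(euclidean_distance((1, 2, 3), (4, 6, 3)))  # 5.0
```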
See distributive property.
A lattice is called distributive if for all elements x, y, and z of the lattice we have x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) and x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
An algebraic property of numbers which states that for all numbers a, b, and c, a(b + c) = ab + ac.
Cf. commutative, associative.
To divide a number a by another number b is to find a third number c such that the product of b and c is a, that is, b × c = a. The number a is called the dividend, the number b is called the divisor, and the number c is called the quotient. The operation of dividing may be denoted by a horizontal or diagonal slash separating the dividend and divisor (with the dividend on top), or by a horizontal dash with a dot above and below it placed between the dividend and divisor.
In the case of whole numbers a and b there may not be a whole number quotient; however, there are always unique whole numbers q and r such that a = b × q + r, with 0 ≤ r < b. In this case q is called the quotient and r is called the remainder. If in a particular case r = 0, we say that b divides a, and this is often denoted by b|a.
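The quotient-and-remainder statement above corresponds to Python's built-in divmod (a small illustration):

```python
a, b = 17, 5
q, r = divmod(a, b)        # unique q, r with a = b * q + r and 0 <= r < b
print(q, r)                # 3 2
print(a == b * q + r)      # True
```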
A number that is being divided.
A number that is dividing another. | http://www.mathacademy.com/pr/prime/browse.asp?LT=L&ANCHOR=derivative00000000000000000000&LEV=&TBM=Y&TAL=Y&TAN=Y&TBI=Y&TCA=Y&TCS=Y&TDI=Y&TEC=Y&TFO=Y&TGE=Y&TGR=Y&THI=Y&TNT=Y&TPH=Y&TST=Y&TTO=Y&TTR=Y&TAD=N | 13 |
72 | Introduction to Binary Numbers
How Computers Store Numbers
Computer systems are constructed of digital electronics. That means that their electronic circuits can exist in only one of two states: on or off. Most computer electronics use voltage levels to indicate their present state. For example, a transistor with five volts would be considered "on", while a transistor with no voltage would be considered "off." Not all computer hardware uses voltage, however. CD-ROM's, for example, use microscopic dark spots on the surface of the disk to indicate "off," while the ordinary shiny surface is considered "on." Hard disks use magnetism, while computer memory uses electric charges stored in tiny capacitors to indicate "on" or "off."
These patterns of "on" and "off" stored inside the computer are used to encode numbers using the binary number system. The binary number system is a method of storing ordinary numbers such as 42 or 365 as patterns of 1's and 0's. Because of their digital nature, a computer's electronics can easily manipulate numbers stored in binary by treating 1 as "on" and 0 as "off." Computers have circuits that can add, subtract, multiply, divide, and do many other things to numbers stored in binary.
How Binary Works
The decimal number system that people use every day contains ten digits, 0 through 9. Start counting in decimal: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, Oops! There are no more digits left. How do we continue counting with only ten digits? We add a second column of digits, worth ten times the value of the first column. Start counting again: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 (Note that the right column goes back to zero here.), 21, 22, 23, ... , 94, 95, 96, 97, 98, 99, Oops! Once again, there are no more digits left. The only way to continue counting is to add yet another column worth ten times as much as the one before. Continue counting: 100, 101, 102, ... 997, 998, 999, 1000, 1001, 1002, .... You should get the picture at this point.
Another way to make this clear is to write decimal numbers in expanded notation. 365, for example, is equal to 3×100 + 6×10 + 5×1. 1032 is equal to 1×1000 + 0×100 + 3×10 + 2×1. By writing numbers in this form, the value of each column becomes clear.
The binary number system works in the exact same way as the decimal system, except that it contains only two digits, 0 and 1. Start counting in binary: 0, 1, Oops! There are no more binary digits. In order to keep counting, we need to add a second column worth twice the value of the column before. We continue counting again: 10, 11, Oops! It is time to add another column again. Counting further: 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111.... Watch the pattern of 1's and 0's. You will see that binary works the same way decimal does, but with fewer digits.
Binary uses two digits, so each column is worth twice the one before. This fact, coupled with expanded notation, can be used to convert from binary to decimal. In the binary system, the columns are worth 1, 2, 4, 8, 16, 32, 64, 128, 256, etc. To convert a number from binary to decimal, simply write it in expanded notation. For example, the binary number 101101 can be rewritten in expanded notation as 1×32 + 0×16 + 1×8 + 1×4 + 0×2 + 1×1. By simplifying this expression, you can see that the binary number 101101 is equal to the decimal number 45.
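The expanded-notation conversion can be checked in a few lines of Python (an illustration, not part of the original page):

```python
def binary_to_decimal(bits):
    # Each column is worth twice the column to its right
    value = 0
    for digit in bits:
        value = value * 2 + int(digit)
    return value

print(binary_to_decimal("101101"))   # 45, as in the text
print(int("101101", 2))              # built-in equivalent
```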
An easy way to convert back and forth from binary to decimal is to use Microsoft Windows Calculator. You can find this program in the Accessories menu of your Start Menu. To perform the conversion, you must first place the calculator in scientific mode by clicking on the View menu and selecting Scientific mode. Then, enter the decimal number you want to convert and click on the "Bin" check box to convert it into binary. To convert numbers from binary to decimal, click on the "Bin" check box to put the calculator in binary mode, enter the number, and click the "Dec" check box to put the calculator back in decimal mode.
How Hexadecimal Works
Binary is an effective number system for computers because it is easy to implement with digital electronics. It is inefficient for humans to use binary, however, because it requires so many digits to represent a number. The number 76, for example, takes only two digits to write in decimal, yet takes seven digits to write in binary (1001100). To overcome this limitation, the hexadecimal number system was developed. Hexadecimal is more compact than binary but is still based on the digital nature of computers.
Hexadecimal works in the same way as binary and decimal, but it uses sixteen digits instead of two or ten. Since the western alphabet contains only ten digits, hexadecimal uses the letters A-F to represent the digits ten through fifteen. Here are the digits used in hexadecimal and their equivalents in binary and decimal:
Hexadecimal: 0    1    2    3    4    5    6    7    8    9    A    B    C    D    E    F
Decimal:     0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
Binary:      0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
Let's count in hexadecimal. Starting from zero, we count 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. At this point there are no more digits, so we add another column. Continue counting: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1A, 1B, 1C, 1D, 1E, 1F. Once again, we are out of digits in the first column, so we add one to the next column. Continue counting once again: 20, 21, 22, ..., 29, 2A, 2B, 2C, 2D, 2E, 2F, 30, 31, 32, ..., 3E, 3F, 40, 41, 42, ... 99, 9A, 9B, 9C, 9D, 9E, 9F, A0, A1, A2, ... F9, FA, FB, FC, FD, FE, FF, 100, 101, 102, .... Watch the pattern of numbers and try to relate this to the way you count in decimal or binary. You will see that it is the same procedure, but with sixteen digits instead of 10 or 2.
Each column in hexadecimal is worth 16 times the column before, while each column in binary is worth 2 times the column before. Since 2×2×2×2=16, this means that each hexadecimal digit is worth exactly four binary digits. This fact makes it easy to convert between binary and hexadecimal.
To convert from hexadecimal to binary, simply look at the chart above and replace each digit in the hexadecimal number with its corresponding four-digit binary number. For example, 8F in hexadecimal is 10001111 in binary, since 8=1000 and F=1111.
To convert from binary to hexadecimal, reverse the procedure and break the binary number into blocks of four digits. Then, replace each block of four digits with its corresponding hexadecimal digit. If you cannot divide the binary number evenly into blocks of four digits, add zeros to the left side of the number to make it work. For example, to convert 110101 to hexadecimal, first add two zeros at the beginning of the number to make it 00110101. Since 00110101 has eight digits, it can be divided into two blocks of four digits, 0011 and 0101. Since 0011=3 and 0101=5, the corresponding hexadecimal number is 35.
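Both directions of the conversion described above can be written compactly; the sketch below leans on Python's int and format to do the digit grouping, so it is an illustration rather than the hand method itself:

```python
def hex_to_binary(h):
    # Replace each hex digit by four binary digits
    return format(int(h, 16), "0{}b".format(4 * len(h)))

def binary_to_hex(b):
    # Pad to a multiple of four digits, then group
    b = b.zfill((len(b) + 3) // 4 * 4)
    return format(int(b, 2), "0{}X".format(len(b) // 4))

print(hex_to_binary("8F"))      # 10001111
print(binary_to_hex("110101"))  # 35
```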
If you need more detailed information, you can find it from the many other pages of the Internet dedicated to the topic of binary and hexadecimal numbers. If you have any comments or concerns about this web page, please do not hesitate to send them to me.
Czech Translation by Alex Slovak | http://www.swansontec.com/binary.html | 13 |
80 | Shear force and bending moment are the internal force and internal couple that arise when a beam is subjected to external loads. A beam is a structural member whose longitudinal dimension is much larger than its x-sectional dimensions and is loaded such that it bends transversely to its longitudinal axis. The distribution of the shear force and bending moment along the axis of the beam is dependent on the type of loading and support on the beam.
Depending on the type of support, a beam can be described as simply-supported (or simple beam) if it is pinned at one end and has a roller at the other end; cantilevered if it is fixed at one end only; continuous if it spans over one or more internal supports in addition to the end supports; or an overhang beam if some part of it extends beyond a pin or roller support.
(Figures: Simple Beam, Continuous Beam, Cantilever Beam, Overhang Beam)
The load on a beam can be described as concentrated or point load P if the load acts at just a point. If the load acts over a finite distance of the beam, it is said to be a distributed loading and is specified by the load intensity. A distributed load is said to be uniform if it has a constant intensity q or linearly varying if the intensity varies linearly from q1 to q2 over a finite portion of the beam. The distributed loading may also vary non-linearly. A beam may also be subjected to a moment load, M.
The effect of the external force P is to create internal stresses and strains within the beam. The external force has to be resisted by internal forces and couples that act at every x-section of the beam. The intensities of these internal forces give rise to the stresses and strains within the beam. The external force is finally transferred to the beam supports and is resisted by the reactions at the supports.
Consider the beam below loaded by the force P. Imagine that the beam
is cut at a cross-section m-n located at a distance x from the left support
and the left-hand part is isolated as a free-body.
The free-body is held in equilibrium by the reaction RAv and RAH at the support and the internal forces Va and Ha and internal couple of moment Ma. The internal force Va, acting parallel to the cross-section is called the shear force while the internal force, Ha acting normal to the cross-section and in the longitudinal direction of the beam is called axial force. The internal couple of moment Ma is called the bending moment.
From equilibrium consideration,
Ha = RAH
Va = RAv
Ma = RAv · x
Note that for most beams, the force P will act in a direction perpendicular to the longitudinal axis of the beam(i.e. tranverse to the beam axis) hence there will be no internal axial force for most beams.
For design purposes, it is important to know the maximum value of Va and Ma and the cross-section where that occurs. This is generally achieved through the evaluation and plotting of the shear force and bending moment at every cross-section of the beam. This plot is known as the shear force and bending moment diagrams. Design of the beam is then based on the maximum values of the shear force and bending moment.
For statically determinate beams, the support reactions can be determined from only equilibrium equations. Having calculated the support reactions, the expressions for the shear force and bending moment are written for each segment or region of the beam. The shear force and bending moment diagrams are then drawn using the expressions determined for the various regions of the beam.
The procedure involved in solving problems concerned with determining the shear force and bending moment distribution on beams can be summarized in the following steps:
A. Calculate the support reactions. Remove the supports and draw the free-body diagram of the beam. Use equilibrium conditions to determine the support reactions.
B. Determine the shear force and bending moment equations for each region of the beam. Note that each expression is valid only within the region for which it was derived and cannot be applied outside that region.
C. Draw shear force (S.F.) and bending moment (B.M.) diagrams. Use the equations determined from B and ensure that each equation is applied only within the region in which it is valid.
1. Establish how many regions you have on the beam.
2. Calculate the support reactions for the entire beam
- Consider equilibrium of the free body diagram for the entire beam
3. Write the shear force and bending moment equations for each region of the beam.
- Consider equilibrium of the left or right hand portions of the beam, obtained by passing a section at any point in the region.
- Keep track of your choice of co-ordinate origin and the range of validity of x for each region.
4. Plot the shear force and bending moment diagrams.
- Use the expressions / equations obtained in Step 3 for each region to plot the SF and BM diagrams.
- If the shear force or BM is constant in a region, it means that the SF and BM do not vary within the region but have the same value at all points in that region.
- If the SF or BM equation in a region is a linear or first order function, then the SF or BM will vary linearly in the region. You need to determine the value of the SF or BM at two points in the region and connect these with a straight line.
- If the SF or BM equation is a higher order function (quadratic, cubic, etc.), determine whether the function has a maximum or minimum value in the region and obtain the locations and values of the maximum and minimum SF or BM. Compute values of the function at 5 or more locations and join them with a smooth curve.
Determine the maximum shear force and bending moment in a simple beam of span L that is loaded by a concentrated force P at midspan. Draw the shear force and bending moment diagrams.
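One way to work this example numerically is sketched below. The closed-form results V = ±P/2 and Mmax = PL/4 at midspan are the standard answers for this case; the particular values of P and L are arbitrary choices for illustration:

```python
P, L = 10.0, 4.0                  # point load (kN) and span (m), arbitrary illustrative values
RA = RB = P / 2                   # symmetric support reactions

def shear(x):
    return RA if x < L / 2 else RA - P           # jumps by P under the load

def moment(x):
    return RA * x if x <= L / 2 else RA * x - P * (x - L / 2)

xs = [i * L / 400 for i in range(401)]
print(max(abs(shear(x)) for x in xs))            # P/2 = 5.0
print(max(moment(x) for x in xs), P * L / 4)     # Mmax = PL/4 = 10.0 at midspan
```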
Determine the shear force and bending moment distributions in a simply-supported beam of span L that is loaded by two equal concentrated loads P as shown.
For the cantilever beam of length L subjected to a uniformly distributed load (udl) of intensity q, draw the shear force and bending moment diagrams.
For the simply-supported beam of length L loaded by a udl of intensity q along its entire length, draw the shear force and BM diagram.
Example 4.5: (Problem 4.2-1)
Determine the shear force and bending moment at the middle of the simple beam AB shown below.
Example 4.6: (Problem 4.2-10)
The beam ABC shown in the figure is attached to a pin support at C and a roller support at A. A uniform load of intensity q acts on part AB and a triangular load of maximum intensity 2q acts on part BC. (a) Obtain an expression for the bending moment M in part AB at distance x from support A. (b) From this expression, determine the maximum bending moment Mmax in part AB.
Given: Load Intensity Curve, show that:
Solution: 1. Resultant load is equal to area under the curve.
2. Resultant load or total pressure acts through centroid of area under the curve.
A simple supported beam carries a vertical load that increases uniformly from zero at the left support to a maximum value of q kN/m at the right support. Draw the shear force and bending moment diagrams. Assume the q = 9 kN/m and L=6m
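A numerical sketch of this example is given below, treating the load intensity as q(x) = q·x/L and using the closed-form statics results RA = qL/6, RB = qL/3 and Mmax = qL²/(9√3) at x = L/√3 as a check (the grid spacing is an arbitrary choice):

```python
q, L = 9.0, 6.0                       # peak intensity (kN/m) and span (m) from the example
W = q * L / 2                         # resultant of the triangular load = 27 kN
RB = W * (2 * L / 3) / L              # resultant acts at 2L/3 from the left support
RA = W - RB                           # RA = 9 kN, RB = 18 kN

def shear(x):
    return RA - q * x**2 / (2 * L)    # RA minus the load to the left of the section

def moment(x):
    return RA * x - q * x**3 / (6 * L)

xs = [i * L / 600 for i in range(601)]
x_max = max(xs, key=moment)
print(RA, RB)                                      # 9.0 18.0
print(round(x_max, 2), round(moment(x_max), 2))    # ~3.46 m, ~20.78 kN*m
print(round(shear(x_max), 3))                      # ≈ 0 at the max-moment section
```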
For purposes of writing equilibrium equations, the entire distributed load associated with the relevant free body diagram is replaced by its resultant (which is equal to the area occupied by the distributed load), which acts through the centroid of the distributed load.
Relationship Between Load, Shear Force and Bending Moment
Consider an element of a beam that is cut out between two x-sections that are distance dx apart (fig. 4.8a). It is assumed that a distributional load of intensity q acts on the top surface of an element. The internal forces on both faces of the element are as shown below:
Note: The distance dx is infinitesimally small.
From equilibrium forces in the vertical direction, we have that
V - (V + dV) - qdx = 0
dV/dx = -q ---- (1)
This expression shows that the rate of change of V with respect to x is equal to -q, where q is the udl and is positive when acting downward.
From equation 1 above, we have that
VB - VA = - ∫ q dx (from A to B) ---- (2)
This expression shows that the difference VB - VA of the shear forces at two sectors B and A, is equal in magnitude to the resultant of the distributed load between the two sections. The area of the load-intensity diagram may be treated as either positive or negative, depending on whether q acts downward or upward, respectively.
Note that these equations were derived for the case of uniformly distributed loading and will not apply where a point load is acting on the beam. Equation 2 cannot be used to find the difference in shear forces between two points if a point load acts on the beam between the two points, since the intensity of load q is undefined for a point load.
Next, let us consider equilibrium by summing moments about an axis through the left hand face of the element and perpendicular to the plane of the figure.
m + qdx (dx/2) + (V + dV)dx - (m+ dm) = 0
Neglecting products of differentials because they are very small in comparison to the other terms, we obtain that
dm/dx = V ----- (3)
This expression shows that the rate of change of the bending moment m with respect to x is equal to the shear force. Note again that this equation applies only in regions where distributed loads or zero load (i.e. q = 0) act on the beam. At a point where a concentrated load acts, a sudden change (or discontinuity) in the shear force occurs and the derivative dm/dx is undefined.
Observe that from equation (3),
MB - MA = ∫ V dx (from A to B) ---- (4)
This implies that the difference in bending moment between any two sections A and B is equal to the area of the shear force diagram between two points.
Note that equation 3 shows that the bending moment is a maximum where the shear force is zero.
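These differential relations are easy to verify numerically for, say, the uniformly loaded simple beam of Example 4.4, for which V(x) = qL/2 - qx and M(x) = qLx/2 - qx²/2 are the standard closed-form results (the values of q and L below are arbitrary):

```python
q, L, dx = 2.0, 10.0, 1e-6        # arbitrary load intensity (kN/m) and span (m)

def V(x):
    return q * L / 2 - q * x      # shear for a simply supported beam under a udl

def M(x):
    return q * L / 2 * x - q * x**2 / 2

x = 3.0
print((M(x + dx) - M(x)) / dx, V(x))   # dM/dx matches V  (both ≈ 4.0 here)
print((V(x + dx) - V(x)) / dx, -q)     # dV/dx matches -q (both -2.0)
print(M(L / 2), q * L**2 / 8)          # max M where V = 0 (midspan): qL^2/8 = 25.0
```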
Considering a concentrated load P acting on top of the elemental area,
Considering equilibrium of forces in the vertical direction:
Σ Fv = 0;
V - P - (V + V1) = 0
==> V1 = -P ---- (5)
This implies that as we move from left to right of the point of application of a concentrated load, the shear force changes abruptly and decreases by an amount equal to the maginitude of the downward point load.
Considering equiblibrium of moments about the left-face:
-M - P(dx/2) - (V + V1)dx + M + M1 = 0
==> M1 = P(dx/2) + Vdx + V1dx = 0
since dx is infinitesimally small. Hence, we see that the BM does not change as we pass through the point of application of a concentrated load. Note, however, that even though the BM does not change, dM/dx does change abruptly: for the element shown above, dM/dx = V on the left face and dM/dx = V + V1 = V - P on the right face. Thus at the point of application of the concentrated load, dM/dx changes abruptly, decreasing by an amount equal to the concentrated load, i.e. V1 = -P.
1). For a simply supported beam of span L carrying a concentrated load P at midspan, observe that the shear force to the left of the concentrated load is constant and equal to P/2, while to the right of the concentrated load the shear force is -P/2.
2). Observe that the slope of the shear force diagram, dV/dx, on either side of the concentrated load is zero, indicating zero load density, q.
3). The slope of the BM diagram, dM/dx, to the left of the concentrated load is (PL/4)/(L/2) = P/2, while that to the right of the concentrated load is (PL/4)/(-L/2) = -P/2. These values are equal to the shear force on these segments of the beam.
4). At the point of application of the concentrated load, there is an abrupt change in the shear force diagram (equal to P) and a corresponding change in the slope of the BM diagram (illustrated in the sketch following this list).
5). Observe that the difference in the BM between
any two points A and B is equal to the area of the shear force diagram
between the two points. Note that this would not be the case if the
beam were subjected to a load in the form of a couple.
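The following small Python sketch (illustrative only; the values P = 10 kN and L = 4 m are arbitrary choices, not from the notes) reproduces observations 1 through 4 for a simply supported beam with a concentrated load P at midspan: the shear force jumps by -P at the load point, while the bending moment is continuous there and only its slope changes.

```python
# Illustrative sketch: simply supported beam of span L with a concentrated
# load P at midspan. P and L below are arbitrary values chosen for the demo.

P, L = 10.0, 4.0
R_left = P / 2.0                   # each support carries half the load

def shear(x):
    # V = +P/2 to the left of midspan, -P/2 to the right: a jump of -P at x = L/2.
    return R_left if x < L / 2 else R_left - P

def moment(x):
    # M rises with slope +P/2 to a peak of P*L/4 at midspan, then falls with slope -P/2.
    return R_left * x if x <= L / 2 else R_left * x - P * (x - L / 2)

eps = 1e-9
print("V just left of the load :", shear(L / 2 - eps))               # +P/2
print("V just right of the load:", shear(L / 2 + eps))               # -P/2 (abrupt drop of P)
print("M at the load point     :", moment(L / 2))                    # P*L/4, no discontinuity
print("slope of M on the left  :", moment(L / 2) - moment(L / 2 - 1))  # +P/2 per metre
print("slope of M on the right :", moment(L / 2 + 1) - moment(L / 2))  # -P/2 per metre
```

The printed slopes match the shear force on each half of the beam, in line with equation 3.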
Using the relationship between
distributed load, shear force and bending moment, obtain the expressions
for the shear force and bending moment for the two beams shown below.
Before dawn on 8 November 1942, American soldiers waded through the surf of North African beaches in three widely separated areas to begin the largest amphibious operations that had ever been attempted in the history of warfare. These troops were the vanguard for a series of operations that eventually involved more than a million of their compatriots in action in the Mediterranean area. One campaign led to another. Before the surrender in May 1945 put an end to hostilities in Europe, American units in the Mediterranean area had fought in North Africa, Sicily, Italy, Sardinia, Corsica, and southern France.
The decision to take the initiative in the West with an Allied invasion of North Africa was made by Prime Minister Winston S. Churchill and President Franklin D. Roosevelt. It was one of the few strategic decisions of the war in which the President overrode the counsel of his military advisers.
The reasons for it were as much political as military. At first TORCH, as the operation was called, had no specific military objective other than to effect a lodgment in French North Africa and to open the Mediterranean to Allied shipping. It stemmed mainly from a demand for early action against the European members of the Axis, and ostensibly was designed to ease the pressure on the hard-pressed Soviet armies and check the threatened advance of German power into the Middle East.
A combined Anglo-American attack on North Africa might have come earlier had it not been for the pressing need to use the extremely limited resources of the Allies to defend the eastern Mediterranean and stem the Japanese tidal wave that ultimately engulfed Burma, Malaya, the East Indies, the Philippines, and large areas of the southwest Pacific. In fact the invasion of North Africa had been a main topic of discussion between President Roosevelt, Prime Minister Churchill, and their chief military advisers, known collectively as the Combined Chiefs of Staff (CCS), at the first of the Allied wartime conferences held in Washington (ARCADIA) during the week before Christmas 1941. The thought of a North African undertaking at that time was inspired by hope of winning the initiative at relatively small cost and "closing and tightening the ring" around Germany, preparatory to a direct attack upon the core of its military power.
American military leaders had long appreciated the fact that the occupation of North Africa held the promise of producing valuable results for the Allied cause. (See Map II, inside back cover.) It would prevent Axis penetration of the French dependencies in that region, help secure the British line of communication through the Mediterranean, and provide a potential base for future land operations in the Mediterranean and southern Europe. Nevertheless, they were opposed on
For a full discussion of the views presented at ARCADIA, see Matloff and Snell, Strategic Planning, 1941-1942. Memo, COS for CsofS, 22 Dec 41; sub: American-British Strategy, Operations Division (OPD) files ABC 337 ARCADIA (24 Dec 41). Joint Board (B) 35 Ser. 707, 11 Sep 41, Sub: Brief of Strategic Concept of Operations Required to Defeat Our Potential Enemies. Before TORCH there were a number of plans for the invasion of North Africa. As early as the spring of 1941 the U.S. Joint Board had begun work on plans to seize Dakar. The code name for this operation was BLACK later changed to BARRISTER. GYMNAST and SUPER-GYMNAST contemplated joint operations with the British in the Casablanca area. The British also had a plan for a landing in Tunisia. For additional details on GYMNAST and SUPER- GYMNAST see Matloff and Snell, Strategic Planning for Coalition Warfare, 1941-1942, Chapters XI and XII.
strategic grounds to the dissipation of Allied strength in secondary ventures. Confident that America's great resources eventually would prove the decisive factor in the war, they favored a concentration of force in the United Kingdom for a massive attack against western Europe at the earliest possible time.
The British accepted the American view that the main blow would eventually have to be delivered in western Europe, but they hesitated to commit themselves on when and where it should fall. Even at this early stage they showed a preference for peripheral campaigns to be followed by a direct attack on the enemy only after he had been seriously weakened by attrition. Such a "peripheral strategy" came naturally to British leaders. They had followed it so often in earlier wars against continental powers that it had become deeply imbedded in England's military tradition. But another factor that led them to shy away from an immediate encounter with the enemy on his home grounds was the vivid memory of earlier disasters on the Continent. About these the British said little at this time but that the fear of another debacle influenced their arguments can be taken for granted. Later it was to come more openly to the surface.
Churchill and Field Marshal Sir Alan Brooke, Chief of the Imperial General Staff, from the outset stressed the advantages of a North African operation. They made much of the tonnage that would be saved by opening the Mediterranean and the likelihood that the French in North Africa, despite the fact that they were torn by dissension, would co-operate with the Allies once they landed. Thus France would be brought back into the struggle against the Axis.
While the majority of American military leaders had their doubts about the value of a North African invasion and its chances of success, President Roosevelt was attracted to the idea largely because it afforded an early opportunity to carry the war to the Germans. In his opinion it was very important to give the people of the United States a feeling that they were at war and to impress upon the Germans that they would have to face American power on their side of the Atlantic. Because of the interest of the two political heads, who in many matters saw eye to eye, the Combined Chiefs of Staff, without committing themselves definitely to any operation, agreed at the ARCADIA Conference to go ahead with a plan to invade North Africa.
Memo, WPD for CofS, 28 Feb 42, sub: Strategic Conceptions and Their Applications to SWPA, OPD files, Exec 4, Envelope 35; Notation by Eisenhower, 22 Jan 42 entry, Item 3. OPD Hist Unit File. The date for such an assault as estimated in early 1942 was to be sometime in the spring of 1943. Notes, GCM [George C. Marshall], 23 Dec 41, sub: Notes on Mtg at White House With President and Prime Minister Presiding, War Plans Division (WPD) 4402-136.
The task of working out such a plan was given to General Headquarters (GHQ) in Washington. By combining the main features of GYMNAST and a British scheme to attack Tunisia, GHQ produced a plan in record time called SUPER-GYMNAST. This plan was first submitted for review to Maj. Gen. Joseph W. Stilwell, who had been working on plans to seize Dakar, and then to Maj. Gen. Lloyd R. Fredendall. On the basis of their comments a revised plan was drawn up and approved on 19 February 1942.
Soon thereafter, unforeseen developments arose that prevented immediate implementation of the revised plan. Among these were the heavy losses the British Navy suffered in the Mediterranean and the Japanese advances in southeastern Asia, the Philippines, and the Netherlands Indies which made it imperative to give the Pacific area first call on American resources, particularly in ships. The shipment of men and supplies to the threatened areas put so great a strain on the Allied shipping pool, already seriously depleted by the spectacular success of German U-boats, that little was available for an early venture into North Africa or anywhere else. Before the situation eased, preparations for meeting the German Army head on in Europe, known as BOLERO, had received the green light in priorities over SUPER-GYMNAST.
As in the case of SUPER-GYMNAST BOLERO had its roots in strategic thinking that antedated Pearl Harbor. Months before 7 December, basic Anglo-American strategy, in the event of America's entry into the war, called for the defeat of Germany, the strongest Axis Power, first. This grand strategic concept was discussed as a hypothetical matter in pre-Pearl Harbor British-American staff conversations held in Washington between 29 January and 27 March 1941 and later set forth in the Allied agreement (ABC-1) and in the joint Army-Navy plan, RAINBOW 5, which were submitted to the President in June 1941. While sympathetic toward the strategy in both ABC-1 and RAINBOW 5, Roosevelt refrained from approving either at the time, probably for political reasons. At the ARCADIA Conference in December 1941, the basic strategic concept was confirmed and a de-
The code name GYMNAST continued to be used loosely by many to apply to SUPER-GYMNAST as well as the original plan. Interv with Brig Gen Paul M. Robinett. USA (Rt.). 29 Jun 56. OCMH. Morison, Battle of the Atlantic, Chs. VI, VII. Ltr, Secy War and Secy Navy to President, 2 Jun 41, copy filed in JB 325. Ser.
cision was made to begin the establishment of an American force in the United Kingdom. This decision, however, "was not definitive" since it was essentially based on the need of protecting the British Isles and did not include their use as a base for future offensive operations against the Continent. The omission troubled many American leaders, including Secretary of War Henry L. Stimson, who in early March tried to persuade the President that "the proper and orthodox line of our help" was to send an overwhelming force to the British Isles which would threaten an attack on the Germans in France. In this he was supported by the Joint Chiefs of Staff who had accepted the detailed analysis of the military situation, worked out by the War Plans Division under Brig. Gen. Dwight D. Eisenhower in late February. As a result the President replied to the Prime Minister on 8 March that in general the British should assume responsibility for the Middle East, the United States for the Pacific, and both should operate jointly in the Atlantic area. At the same time, the American planners were assigned the task of preparing plans for an invasion of northwest Europe in the Spring of 1943.
The principal argument for selecting this area for the main British-American offensive was that it offered the shortest route to the heart of Germany and so was the most favorable place in the west where a vital blow could be struck. It was also the one area where the Allies could hope to gain the necessary air superiority, where the United States could "concentrate and maintain" the largest force, where the bulk of the British forces could be brought into action, and where the maximum support to the Soviet Union, whose continued participation in the war was considered essential to the defeat of Germany, could be given. By 1 April an outline draft, which came to be known first as the Marshall Memorandum and later as BOLERO, was far enough advanced to be submitted to the President who accepted it without reservation and immediately dispatched Mr. Harry Hopkins and General George C. Marshall, Army Chief of Staff, to London to obtain British approval.
As originally conceived, BOLERO contemplated a build-up of military power in the United Kingdom simultaneously with continuous raids against the Continent, to be followed by a full-scale attack on Hitler's "Festung Europa" in the spring of 1943. Later the code name ROUNDUP was applied to the operational part of the plan. Under this plan forty-eight divisions, 60 percent of which would be American, were to be placed on the continent of Europe by Septem-
Stimson and Bundy, On Active Service, pp. 415-16. Ibid., pp. 418-19; Matloff and Snell, Strategic Planning, 1941- 1942, pp. 183-85; Bryant, Turn of the Tide, p. 280.
ber of that year. Included in BOLERO was a contingent alternate plan known as SLEDGEHAMMER, which provided for the establishment of a limited beachhead on the Continent in the fall of 1942 should Germany collapse or the situation on the Eastern Front become so desperate that quick action in the west would be needed to relieve German pressure on the Soviet Union.
In London Hopkins and Marshall outlined the American plan to the British. While stressing BOLERO as a means of maintaining the Soviet Army as a fighting force, they also emphasized the need of arriving at an early decision "in principle" on the location and timing of the main British-American effort so that production, allocation of resources, training, and troop movements could proceed without delay.
Churchill seemed to be warmly sympathetic to the American proposal to strike the main blow in northwestern Europe, and described it as a "momentous proposal" in accord with "the classic principle of war-namely concentration against the main enemy." But though the Prime Minister and his advisers agreed "in principle," Marshall was aware that most of them had "reservations regarding this and that" and stated that it would require "great firmness" to avoid further dispersions. That he was right is borne out by the fact that Churchill later wrote that he regarded SLEDGEHAMMER as impractical and accepted it merely as an additional project to be considered along with invasion of North Africa and perhaps Norway as a possible operation for 1942. At all events, BOLERO was approved by the British on 14 April with only one strongly implied reservation: it was not to interfere with Britain's determination to hold its vital positions in the Middle East and the Indian Ocean area.
While BOLERO-SLEDGEHAMMER was acceptable to the British in mid-April, it remained so for less than two months. By early May
Min of Mtg, U.S.-British Planning Staffs, London, 11 Apr 42, Tab N. ABC 381 BOLERO (3-16-42), 5. For a fuller treatment of these discussions see Gordon A. Harrison, Cross-Channel Attack (Washington, 1951), pp. 13 18, in UNITED STATES ARMY IN WORLD WAR II. Ltr atchd to Min of Mtg, U.S. Representatives-British War Cabinet, Def Com. 14 Apr 42, Chief of Staff 1942-43 files, WDCSA 381.1. Msg, Marshall to McNarney, 13 Apr 42, CM-IN 3457. Churchill, Hinge of Fate, pp. 323-24. Paper, COS, 13 Apr 42, title: Comments on Gen Marshall's Memo, COS (42)97(0) Tab F, ABC 381 BOLERO (3-16-42), 5; Churchill, Hinge of Fate, pp. 181-85; Bryant, Turn of the Tide, pp. 286-87. Stimson and Bundy, On Active Service, pp. 418-19.
they were expressing strong doubts that the resources to launch an early cross-Channel operation could be found. In part the uncertainty was due to the state of the American landing craft production program which was not only lagging far behind schedule but was indefinite as to type and number. What the full requirements in craft would be no one actually knew, for all estimates in regard to both number and type were impressionistic. In the original outline plan, the number needed had been placed at 7,000. This was soon raised to 8,100 by the Operations Division (OPD), still too conservative an estimate in the opinion of many. Lt. Gen. Joseph T. McNarney, Deputy Chief of Staff, for example, considered 20,000 a more realistic figure. As to type, the Army had placed orders with the Navy for some 2,300 craft, mostly small 36-foot vehicle and personnel carriers, for delivery in time for a limited operation in the fall. These, along with 50-foot WM boats (small tank lighters), were considered sufficiently seaworthy by the Navy to negotiate the waters of the English Channel. The rest of the 8,100 were expected to be ready for delivery in mid-April 1943, in time for ROUNDUP.
This construction program, seemingly firm in early April, soon ran into difficulties. Toward the end of April the Navy, after re-examining its own requirements for amphibious operations in the Pacific and elsewhere, concluded it needed about 4,000 craft. If its estimates were allowed to stand, only about half of the Army s needs for SLEDGEHAMMER could be met in the construction program. Some of the resulting deficit might possibly be made up by the British, but this seemed unlikely at the time for their production was also behind schedule.
The second obstacle arose when the British questioned the ability of the landing craft on which construction had begun to weather the severe storms that prevailed in the Channel during the fall and winter months. They convinced the President that their objections to the type of craft under construction in the United States were sound, as indeed they were. The result was that a new program, which shifted the emphasis to the production of larger craft, was drawn up and placed under British guidance. Like the earlier program this one also underwent a series of upward changes.
As the requirements rose, the prospects of meeting them declined. In late May it was still possible to expect delivery in time for ROUNDUP in the spring of 1943 but the hope of obtaining enough craft for SLEDGE-
Bryant, Turn of the Tide, pp. 300-301.
Leighton and Coakley, Global Logistics, 1940-1943, p. 377.
Ibid. pp. 379-80.
HAMMER had dwindled. If the latter operation was to be undertaken at all, it would have to be executed with what craft and shipping could be scraped together. This, of course, would increase the danger that SLEDGEHAMMER would become a sacrificial offering launched not in the hope of establishing a permanent lodgment but solely to ease the pressure on the Soviet armies. For this the British, who would be required to make the largest contribution in victims and equipment, naturally had no stomach.
In late May when Vyacheslav M. Molotov, the Soviet Foreign Commissar, visited London to urge the early establishment of a second front in western Europe, he found Churchill noncommittal. The Prime Minister informed him that the British would not hesitate to execute a cross-Channel attack before the year was up provided it was "sound and sensible," but, he emphasized, "wars are not won by unsuccessful operations."
In Washington a few days later, Molotov found that a different view on SLEDGEHAMMER from the one he had encountered in London still prevailed. Roosevelt, much more optimistic than Churchill, told him that he "hoped" and "expected" the Allies to open a second front in 1942 and suggested that the Soviet Union might help its establishment by accepting a reduction in the shipment of lend-lease general supplies. The conversations ended with a declaration drafted by Molotov and accepted by the President which stated that a "full understanding was reached with regard to the urgent tasks of creating a Second Front in Europe in 1942." This statement, although not a definite assurance that a cross-Channel invasion would soon be launched, differed considerably from the noncommittal declarations of the Prime Minister. It clearly indicated that Washington and London were not in full accord on the strategy for 1942 and that further discussions between U.S. and British leaders were necessary to establish a firm agreement.
By the time of the second Washington conference in June 1942 the Prime Minister and his close military advisers, if they ever truly accepted the U.S. strategy proposed by Marshall, had definitely undergone a change of mind. They now contended that an emergency invasion in 1942 to aid Russia would preclude a second attempt for years to come and therefore no direct attack should be undertaken
Quoted in W. K. Hancock and M. M. Gowing, British War Economy, History of the Second World War, United Kingdom Civil Services, (London: H.M. Stationery Office, 1949), pp. 406-07. Matloff and Snell, Strategic Planning, 1941-1942, pp. 231-32; Sherwood, Roosevelt and Hopkins, pp. 568-70. Matloff and Snell, Strategic Planning, 1941-1942, pp. 231-32.
unless the German Army was "demoralized by failure against Russia."
Aware of the fact that the British had grown cool to SLEDGEHAMMER, if not to ROUNDUP, as the strategy for 1942 and 1943 and anxious to get American troops into action against the main enemy as quickly as possible, President Roosevelt in mid-June sounded out his military advisers on the resurrection of GYMNAST. The suggestion met with strong dissent from Secretary of War Stimson and General Marshall, both of whom now were convinced that the British were just as much opposed to ROUNDUP for 1943 as they were to SLEDGEHAMMER in 1942.
In deference to their views, Roosevelt refrained from openly supporting the British position during the June conference in Washington, with the result that the meetings ended with BOLERO and ROUNDUP-SLEDGEHAMMER ostensibly still intact as the basic Anglo-American strategy in the North Atlantic area. But Churchill's vigorous arguments against a 1942 cross-Channel invasion of the Continent and Roosevelt's lively and unconcealed interest in the Mediterranean basin as a possible alternative area of operations indicated that the opponents of diversionary projects were losing ground. The defeat of the British Eighth Army in a spectacular tank battle at Knightsbridge in Libya on 13 June, the subsequent fall of Tobruk on 21 June, followed by the rapid advance of Field Marshal Erwin Rommel's army toward Alexandria and the Suez Canal, further weakened the position of the U.S. military leaders, for as long as Commonwealth forces were fighting with their backs to the wall in Egypt no British Government could be expected to agree to a cross-Channel venture.
Churchill, who had hurriedly returned to England in the crisis created by Rommel's victories, soon made it unmistakably clear that he was adamant in his opposition to any plan to establish a bridgehead on the Continent in 1942. A premature invasion, he reiterated in a cable to Roosevelt, would be disastrous. Instead he recommended that the American military chiefs proceed with planning for GYMNAST while the British investigated the possibility of an attack on Norway (JUPITER) a pet project of his. To his representative in Washington, Field Marshal Sir John Dill, he sent a message making it clear that he wanted a North African operation. "GYMNAST," he stated,
Memo, COS for War Cabinet, 2 Jul 42, sub: Future Operations WP (42) 278, (COS 42)195(0), ABC 381 (7-25-42) Sec. 4-B, 19; Matloff and Snell, Strategic Planning, 1941-1942, p. 266.
Stimson and Bundy, On Active Service, p. 419.
Churchill, Hinge of Fate, pp. 334-35.
"affords the sole means by which the U.S. can strike at Hitler in 1942 .... However if the President decided against GYMNAST the matter is settled" and both countries would have to remain "motionless in 1942." But for the time being the impetuous Prime Minister was in no position to press strongly for the early implementation of the project, eager though he was to assume the offensive. For weeks to come the military situation would demand that every ton of available shipping in the depleted Allied shipping pool be used to move men, tanks, and other materials around southern Africa to hold Egypt and bolster the Middle East against Rommel's army and the even more potentially dangerous German forces in Russia that had conquered Crimea and were massing for an offensive that might carry them across the Caucasus into the vital oil-rich regions of Iraq and the Persian Gulf.
Strong support for the Prime Minister's objections to a premature invasion of the Continent had come from the British Chiefs of Staff. After considering the advantages and disadvantages of SLEDGEHAMMER, they stated in their report to the War Cabinet on 2 July: "If we were free agents we could not recommend that the operation should be mounted." In reaching this conclusion they were ostensibly persuaded by two reports, one from Lord Leathers, British Minister of War Transport, who had estimated that the operation would tie up about 250,000 tons of shipping at a time when shipping could ill be spared, and the other from Lord Louis Mountbatten, which pointed out that, in the absence of sufficient landing craft in the United Kingdom, all amphibious training for other operations, including cross-Channel in 1943, would have to be suspended if SLEDGEHAMMER were undertaken. The War Cabinet immediately accepted the views of the British Chiefs of Staff and on 8 July notified the Joint Staff Mission in Washington of its decision against an operation on the Continent even if confined to a "tip and run" attack.
In submitting its views on the strategy to be followed, the War Cabinet carefully refrained from openly opposing ROUNDUP as an operation for 1943. But the effect was the same since it was not possible to conduct both the African invasion and the cross-Channel attack with the means then at the disposal of the Allies.
See JCS 24th Mtg, 10 July 42; Msg, Churchill to Field Marshal Dill, 12 Jul 42, ABC 381 (7-25-42) Sec. 4-B; Bryant, Turn of the Tide, pp.
How serious the British considered this latter threat to their vital oil resources is clearly indicated in the many references to it in Field Marshal Brooke's diary. See Bryant, Turn of the Tide, Chs. 8, 9.
Memo, COS for War Cabinet, 2 Jul 42, sub: Future Opns WP (42) 278 (COS 42), ABC 381 (7-25-42) Sec. 4-B, 19.
Msg, War Cabinet Offs to Joint Staff Mission, 8 Jul 42; Leighton and Coakley, Global Logistics, 1940-1943, p. 384.
Because of the lag in landing craft construction, the Joint Chiefs of Staff realized that SLEDGEHAMMER was rapidly becoming a forlorn hope. By the end of June, out of a total of 2,698 LCP's, LCV's, and LCM's estimated as likely to be available, only 238 were in the United Kingdom or on the way. By mid-July General Hull informed Eisenhower, who had gone to London, "that all the craft available and en route could land less than 16,000 troops and 1,100 tanks and vehicles." This was 5,000 troops and 2,200 tanks less than the estimates made in mid-May. Despite these discouraging figures, Marshall and King stubbornly continued to object to dropping SLEDGEHAMMER from the books, not because they wanted it but because they clearly recognized that the fate of ROUNDUP was also at stake in the British Government's attitude toward the emergency operation. Whether in earnest or not they now went so far as to advocate that the United States should turn its back on Europe and strike decisively against Japan unless the British adhered "unswervingly" to the "full BOLERO plan." This attitude so impressed Field Marshal Dill that he seriously considered cabling his government that further pressure for GYMNAST at the expense of a cross-Channel operation would drive the Americans into saying, "We are finished off with the West and will go out in the Pacific." What Dill did not know was that Roosevelt was opposed to any action that amounted to an "abandonment of the British." Nor did the President openly agree with his Joint Chiefs of Staff that the British would be as unwilling to accept a large-scale cross-Channel attack in 1943 as in 1942, whatever their present views. He was still determined to commit the Western Allies to action against the Germans before the end of the year, somehow and somewhere. If an agreement with the British on a cross-Channel attack could not be reached he was quite willing to settle for some other operation. Unlike his chief military advisers, he was far from hostile to a campaign in the Mediterranean, the Middle East, or elsewhere in the Atlantic area, if circumstances ruled out SLEDGEHAMMER or ROUNDUP. In fact, Secretary Stimson believed he was weakening on BOLERO and considered him somewhat enamored of the idea of operations in the Mediterranean. The President's willingness to accept a substitute for an early invasion of Europe appears in the instructions he gave Harry Hopkins, General Marshall, and Admiral King
Leighton and Coakley, Global Logistics, 1940-1943, p. 382. Ibid. Memo, King and Marshall for President, 10 Ju1 42, WDCSA file BOLERO. Draft Cable in CofS file ABC 381 (7-25-42) Sec. 1. Msg, Roosevelt to Marshall, 14 Jul 42, WDCSA file BOLERO; Sherwood, Roosevelt and Hopkins, p. 602. Stimson and Bundy, On Active Service, p. 425.
when he sent them to England on 18 July with large powers to make a final effort to secure agreement on a cross-Channel attack. Should they become convinced after exploring all its angles with the British that such an operation would not prevent "the annihilation of Russia" by drawing off enemy air power, they were to consider other military possibilities.
As might have been expected, the American delegates failed to convince Churchill or the British military chiefs that an early assault on the Continent was practical. The Prime Minister, after questioning both the urgency and feasibility of SLEDGEHAMMER, again emphasized the value of a North African operation and suggested that if the approaching battle for Egypt went well, it might be possible to carry the war to Sicily or Italy.
A realistic estimate of the military situation at the time indicated that launching a successful operation against the mainland of Europe in 1942 was far from bright. Allied war production potential was still comparatively undeveloped and battle-tested divisions were unavailable. Landing craft, despite a high production priority ordered by the Navy in May, were still scarce, shipping was woefully short, and modern tanks, capable of meeting those of the enemy on equal terms, were just beginning to roll off the assembly lines. Even if the production of materiel could be speeded up time was required to raise and organize a large force and train units in the difficult techniques of amphibious warfare. By according additional overriding priorities to BOLERO, the flow of men, equipment, and supplies to the United Kingdom could be increased, but this meant running the grave danger of crippling forces already engaged with the enemy. Should this risk be accepted, there still remained the problem of erecting a logistical organization that could feed men, equipment, and supplies into the battle area without interruption. Considerable progress had been made in building such an organization in the United Kingdom but it was still far from perfect. Taking all these matters into consideration, along with the likelihood that the Germans would have enough strength in France and the Lowlands to contain an invasion without weakening their eastern front, the Combined Chiefs of Staff concluded that, at best, the only landing that could be made on the Continent in 1942 would be a minor one, aimed at securing a foothold with a port and holding and consolidating it during the winter. But the hard facts mutely argued against pitting any force against a veteran
Memos, Roosevelt for Hopkins, Marshall, and King, 16 Jul 42, sub: Instructions for London Conf, Jul 42, signed original in WDCSA 381, Sec. 1; Sherwood, Roosevelt and Hopkins, pp. 603-05; Matloff and Snell, Strategic Planning, 1941-1942, p. 273. Combined Staff Conf, 20 Jul 42, WDCSA 319.1; Matloff and Snell, Strategic Planning, 1941-1942, p. 278.
army on the chance that it would be sustained during the stormy winter weather.
The Americans saw this as clearly as the British. As realists, they knew that an operation in execution would take priority over one in contemplation, and that it would generate pressures that could upset the basic strategy agreed upon for Europe. The weakness of their stand was that nearly a year would probably elapse during which few Americans other than those in the air force would be in action against the Germans. Such a situation the impatient President whose full support they needed could not bring himself to accept. Knowing this, Churchill and the British Chiefs of Staff reiterated time and again the advantages of a North African operation in conjunction with a counteroffensive in Libya. They stressed all the old arguments: it could lead to the liberation of Morocco, Algeria, and Tunisia, bring the French there back into the war against the Axis, open the Mediterranean to through traffic thus saving millions of tons of shipping, cause the withdrawal of German air power from Russia, and force the Germans and Italians to extend themselves beyond their capacity in reinforcing their trans-Mediterranean and southern front. They would not admit that a North African operation in 1942 would rule out ROUNDUP and contended instead that early action in the Mediterranean would lead to a quick victory which would still permit it to be launched in 1943.
The Americans, on the other hand, continued to hold out for SLEDGEHAMMER. They resisted the idea of dropping SLEDGEHAMMER, primarily in order to forestall a diversionary and indecisive operation which would syphon off resources and prevent a true second front from being established in 1943. Marshall and King, if not Hopkins, were certain that the fate of ROUNDUP was at stake and held as firmly as ever the belief that a direct attack against the Continent was the only way to assist the hard-pressed Soviet armies and seriously threaten the military power of Germany. But because of the President's instructions to agree to some military operations somewhere in 1942, it was impossible for them to hold their ground indefinitely. Their position was not strengthened by the course of events in Russia, in the Middle East, and in the Atlantic, or by the opinion expressed by General Eisenhower-recently appointed Commanding General, European Theater of Operations, United States Army (ETOUSA)-that SLEDGEHAMMER had less than a fair chance of success. Nor were they helped by the secret message from Roosevelt to
Memo, Conclusions as to Practicability of SLEDGEHAMMER, 17 Jul 42; Diary of Commander in Chief, OPD Hist Unit file. This memorandum was prepared by General Eisenhower after consultation with Maj. Gen. Mark W. Clark, Maj. Gen. John C. H. Lee, and Col. Ray W. Barker.
Churchill, saying that "a Western front in 1942 was off" and that he was in favor of an invasion of North Africa and "was influencing his Chiefs in that direction." Furthermore, since a cross-Channel operation to ease the pressure on the Soviet Union would have to be carried out primarily by British forces, because the shipping shortage precluded the flow of U.S. troops and aircraft to the United Kingdom in large proportions before the late fall of 1942, the American representatives could not insist on it. Marshall therefore refrained from pressing for the retention of SLEDGEHAMMER in the BOLERO plan after 23 July but continued to insist on ROUNDUP. This left the whole question of alternative action for 1942 undecided.
Informed of the deadlock by Marshall, Roosevelt sent additional instructions to his representatives in London, directing again that an agreement on an operation for 1942 be reached. This message specifically instructed the American delegation to settle with the British on one of five projects: (1) a combined British-American operation in North Africa (either Algeria or Morocco or both); (2) an entirely American operation against French Morocco (the original GYMNAST); (3) a combined operation against northern Norway (JUPITER); (4) the reinforcement of the British Eighth Army in Egypt; (5) the reinforcement of Iran.
The American military chiefs, Marshall and King, now knew that SLEDGEHAMMER was dead, for no cross-Channel attack was possible in the face of British objections and without the President's strong support. Preferring the occupation of French North Africa with all its shortcomings to a campaign in the Middle East or Norway, they reluctantly accepted GYMNAST. On 24 July a carefully worded agreement, drawn up by Marshall and known as CCS 94, was accepted by the Combined Chiefs of Staff. It contained the important condition that the CCS would postpone until mid-September final decision on whether or not the North African operations should be undertaken. (The date 15 September was chosen because it was considered the earliest possible day on which the outcome in Russia could be forecast.) If at that time the Russians clearly faced a collapse that
Quotation from Brooke's diary, 23 July entry, in Bryant, Turn of the Tide, p. 344. Msg, President to Hopkins, Marshall, and King, 23 Jul 42, WDCSA 381, Sec. I; Matloff and Snell, Strategic Planning, 1941-1942, p. 278; Howe, Northwest Africa, p. 13. For War Department views on Middle East operations see OPD study, 15 Jul 42, sub: Comparison of Opn GYMNAST With Opns Involving Reinforcements of Middle East. Exec 5, Item 1. CCS 34th Mtg, 30 Jul, ABC 381 (7-25-42) Sec. 1.
would release so many German troops that a cross-Channel attack in the spring of 1943 would be impractical, the North African invasion would be launched sometime before 1 December. Meanwhile, planning for ROUNDUP was to continue while a separate U.S. planning staff would work with the British on the North African project, now renamed TORCH.
The door to later reconsideration of the agreement, deliberately left open in CCS 94 by General Marshall in order to save the ROUNDUP concept, did not remain open long. In a message to the President on 25 July, Harry Hopkins urged an immediate decision on TORCH to avoid "procrastination and delays." Without further consulting his military advisers, Roosevelt chose to assume that a North African campaign in 1942 had been definitely decided upon and at once cabled his emissaries that he was delighted with the "decision." At the same time he urged that a target date not later than 30 October be set for the invasion. By ignoring the carefully framed conditions in CCS 94 and in suggesting a date for launching TORCH, the President actually made the decision. In so doing, he effectively jettisoned ROUNDUP for 1943, though he probably did not fully realize it at the time.
Although Marshall must have realized the fatal impact of Roosevelt's action on ROUNDUP he was reluctant to view it as one that eliminated the conditions stipulated in CCS 94. At the first meeting of the Combined Chiefs of Staff held after his return to Washington he therefore refrained from accepting the "decision" as final and pointed out that the mounting of TORCH did not mean the abandonment of ROUNDUP. At the same time, he recognized that a choice between the two operations would have to be made soon "because of the logistic consideration involved," particularly the conversion of vessels to combat loaders which, according to a "flash estimate" of the Navy, would require ninety-six days. Nor was Admiral King willing to admit that the President had fully decided to abandon ROUNDUP as well as SLEDGEHAMMER in favor of TORCH.
If Marshall and King entertained any hope of getting the President to reopen the issue and make a definite choice between ROUNDUP and TORCH they were doomed to disappointment. Instead, on 30
Memo by CCS, 24 Jul 42, sub: Opns in 42 43, circulated as CCS 94, ABC 381 (25 Jul 42). For details, see the treatment of CCS 94 and its interpretation in Matloff and Snell, Strategic Planning, 1941-1942. Sherwood, Roosevelt and Hopkins, p. 611. Msg, President to Hopkins Marshall, and King, 25 Jul 42, WDCSA 381, Sec. 1. This view is also expressed in a personal letter, Marshall to Eisenhower, 30 Jul 42, GCM file under Eisenhower, D. D. Min, 34th Mtg CCS, 30 Jul 42, ABC 381 (7-25-42) Sec. 1.
July, at a meeting at the White House with the Joint Chiefs of Staff, the President stated that "TORCH would be undertaken at the earliest possible date" but made no comment on its possible effect on ROUNDUP. The next day his decision on TORCH was forwarded to the British Chiefs of Staff and to General Eisenhower.
However loath the President's military advisers were to sidetrack plans for the direct invasion of the Continent and accept a secondary project in its place, an attack on French North Africa, alone among the operations considered, met strategic conditions for joint Anglo-American operations in 1942 on which both Churchill and Roosevelt could agree. Without the wholehearted support of the two top political leaders in the United States and Great Britain, no combined operation could be mounted. In short, TORCH from the beginning had support on the highest political level in both countries, an advantage never enjoyed by either ROUNDUP or SLEDGEHAMMER.
The decision to invade North Africa restored Anglo-American cooperative planning, which had been showing signs of serious strain. It was now on a sound working basis that permitted the establishment of rights and priorities with relentless determination. What was still needed was a final agreement between Washington and London on the size, direction, and timing of the contemplated operation. Such an agreement was not easy to reach. The big question to be decided was where the main effort of the Allies should be made and when. On this issue Washington and London were at first far apart.
The strategic planners in Washington, mindful of the dangers in French opposition, hostile Spanish reaction, and a German counterstroke against Gibraltar with or without the support of Spain, proposed making the main landings outside the Mediterranean on the Atlantic coast of French Morocco. Troops would take Casablanca and adjacent minor ports, seize and hold the railroad and highways to the east as an auxiliary line of communications, secure all the approaches to Gibraltar, and consolidate Allied positions in French Morocco before moving into the Mediterranean. This, the planners estimated, would take about three months. The plan was a cautious one,
Memo, Maj Gen Walter B. Smith for JCS, 1 Aug 42, sub: Notes of Conf Held at the White House at 8:30 PM, 30 Jul 42, OPD Exec 5, Item 1, Tab 14. Before leaving London, Marshall informed Eisenhower that he would be in command of the TORCH operation, if and when undertaken, in addition to being Commanding General ETOUSA. This appointment was later confirmed by the CCS. For an extended account of this subject see, Leighton and Coakley, Global Logistics 1940-1943, pp. 427-35.
dictated primarily by the fear that the Strait of Gibraltar might be closed by the Germans or the Spanish, acting singly or together.
The bold course, advocated by the strategic planners in London, including many Americans working with the British, was to strike deep into the Mediterranean with the main force at the outset and then, in co-ordination with the British Eighth Army moving west from Egypt, seize Tunisia before the Germans could reinforce the threatened area. They viewed with feelings approaching consternation the cautious American strategy that would waste precious months in taking ports and consolidating positions over a thousand miles distant from Tunisia, whose early occupation they believed to be vital to the success of TORCH. Should the Germans be permitted to establish themselves firmly in that province it was feared that they might, because of shorter lines of communications and land-based air power, be able to hold out indefinitely, thus preventing the extension of Allied control to the strategic central Mediterranean.
The proponents of the inside approach also stressed the relative softness of the Algerian coastal area as compared with that around Casablanca. In their view Algeria with its favorable weather and tide conditions, more numerous and better ports, and proximity to Tunisia seemed to have every advantage over western Morocco as the main initial objective. They believed that even in the matter of securing communications it would be safer to move swiftly and boldly through the Strait of Gibraltar and seize ports along the Algerian coast as far east as Philippeville and Bone. Strong determined action there would cow the Spanish and make them hesitate to permit German entry into Spain for a joint attack on Gibraltar. On the other hand they contended that an unsuccessful attack in the Casablanca area, where operations were extremely hazardous because of unfavorable surf conditions four days out of five, would almost certainly invite Spanish intervention.
For weeks arguments for and against both strategic concepts were tossed back and forth across the Atlantic in what has aptly been called a "transatlantic essay contest." Meanwhile preparations for the attack languished. A logical solution to the problem was to reconcile the conflicting views by combining both into a single plan. This, General Eisenhower, who had been designated to command the operation
Ltr, Prime Minister to Harry Hopkins, 4 Sep 42, as quoted in Churchill, Hinge of Fate, p. 539; Bryant, Turn of the Tide, pp. 401-02. For an extended account see Leighton and Coakley, Global Logistics, 1940-1943, pp. 417-24.
before Marshall left London, attempted to do in his first outline plan of 9 August when he proposed approximately simultaneous landings inside and outside the Mediterranean, the first strong and the latter relatively weak.
Almost immediately the plan struck snags in the form of insufficient naval air support and assault shipping. Shortly after it was submitted, both the American and the British Navies suffered severe losses in naval units, particularly in aircraft carriers. Since close land-based air support would be negligible, confined to a single airfield at Gibraltar under the domination of Spanish guns, carriers were necessary to protect assault and follow-up convoys for the operation. In view of the recent naval losses and needs elsewhere in the world, finding them would take time. The U.S. Navy quickly let it be known that it had no carriers immediately available to fill the void and was unwilling to commit itself on when they would be. This meant that the burden of supplying seaborne air protection would probably fall on the British.
Equally if not more important in determining the size and timing of the landings was the availability of assault shipping. Most of the American APA's (assault troop transports) were tied up in the Pacific where they were vitally needed. To transport the twelve regimental combat teams, envisioned as the force needed to make the three landings, would require 36 APA's and 9 to 12 AKA's (attack cargo transports); and as yet the program for converting conventional transports to assault transports had hardly begun. On 2 August the Navy estimated that sufficient assault shipping, trained crews, and rehearsed troops for an operation of the size originally contemplated would not be ready for landings before 7 November. The British were against postponing the operation and, to gain time, were willing to skimp on the training and rehearsals of assault units and boat crews. The President sided with them on an early attack and on 12 August directed Marshall to try for a 7 October landing date even if it meant the reduction of the assault forces by two thirds. It now fell to Eisenhower and his planning staff to rearrange their plan in the light of available resources and under the pressure for quick action.
In his second outline plan of 21 August Eisenhower set 15 Oc-
Draft Outline Plan (Partial) Opn TORCH, Hq ETOUSA, 9 Aug 42, ABC 381 (7-25-42) 4A. The United States Navy lost a carrier and several cruisers in the Guadalcanal operation; the Royal Navy, one aircraft carrier sunk and one damaged in trying to reinforce Malta. Conversion had begun on ten small vessels taken off the BOLERO run. Bryant, Turn of the Tide, p. 400.
tober as a tentative date for the invasion and proposed dropping the Casablanca operation entirely and concentrating on the capture of Oran in Algeria. That having been accomplished, he would move in two directions, eastward into Tunisia and southwest across the mountains into French Morocco. This plan seemed to ignore the danger to the Allies' line of communications from the direction of both Gibraltar and Spanish Morocco should Spain join the Axis Powers. It also failed to take sufficiently into account the shortage in naval escorts and the logistical problems involved in funneling all the men, equipment, and supplies needed to seize Algiers, French Morocco, and Tunisia into the port of Oran, whose facilities might not be found intact. The complicated convoy arrangements for the assault, follow-up, and build-up phases of the operation that would have to be made were enough by themselves to doom the plan in the eyes of the military chiefs in Washington as too risky.
In response to continuous pressure from the President and the Prime Minister for an early assault, Eisenhower advanced D Day from 15 October to 7 October, when the moon would be in a phase that would facilitate surprise. This date he viewed as the earliest practical time for the beginning of the invasion. But few informed leaders believed that this date could be met. Admiral King considered 24 October more likely, and even the British planners, who were consistently more optimistic about an early D Day than their American colleagues, admitted that meeting the proposed date would require a "superhuman effort."
The most serious problem confronting planners on both sides of the Atlantic continued to be the scarcity of assault shipping. The Navy's original estimate of fourteen weeks as the time required to convert conventional ships to assault vessels, train crews, rehearse troops in embarkation and debarkation, load troops and cargo, and sail from ports of embarkation in the United States and the United Kingdom to destination remained unchanged. This meant that 7 November, the date given in the original estimate, would be the earliest possible day for the assault to begin. The Navy might also have pointed to the shortage of landing craft for transporting tanks and other assault vehicles as an argument against an early D Day. LST's were under construction at the time but none were expected to be available before October or November.
Msg, Eisenhower to AGWAR, 22 Aug 42, copy in ABC 381 (7-25-42), Sec. 4-B. Msg, King to Marshall, 22 Aug 42, sub: Sp Opns, OPD Exec 5, Item 1; Msg 236, COS to Jt Staff Mission, 4 Aug 42, Exec 5, Item 2. No LST's actually became available in time for the initial landings but three "Maracaibos," forerunners of the LST's, were.
Nevertheless Roosevelt and Churchill, impatient of delay, continued to insist on an early invasion date. It was such pressure in the face of shipping, equipment, and training deficiencies that was responsible for Eisenhower's 21 August proposal to limit drastically the size of the assault and confine it entirely to the Mediterranean.
The plan found few supporters even among those who made it. Eisenhower himself regarded it as tentative and the date of execution probably too early because as yet little progress had been made in planning the force to be organized in the United States and not enough was known about scheduling convoys, the availability of air and naval support, or the amount of resistance that could be expected.
So widely varying were the reactions to the plan in Washington and London that a reconciliation of views appeared impossible. Fortunately for the success of the operation, a spirit of compromise developed. By 24 August the British military chiefs were willing to moderate their stand for an early invasion somewhat and even to accept the idea of a Casablanca landing, provided the scope of TORCH was enlarged to include an attack on Philippeville, a port close to Tunisia. Their willingness to make concessions, however, was contingent on a greater naval contribution by the United States. The proposal was unacceptable to the American Joint Chiefs of Staff who now used the 21 August plan to bolster their original argument that the main blow should be struck in the west, outside the Mediterranean, at or near Casablanca. They would accept an assault on Oran along with one on Casablanca but none against ports farther to the east. They were also willing to adjust Eisenhower's directive as he had requested, bringing his mission more in line with his resources, but they stubbornly opposed any increase in the U.S. Navy's contribution which would weaken the fleet in critical areas elsewhere in the world.
Such was the status of TORCH planning when Churchill returned from Moscow where he had been subjected to Stalin's taunts because of the failure of the Western Allies to open up a second front on the Continent. Only by playing up the military advantages of TORCH and giving assurances that the invasion would begin no later than 30 October had he been able to win the Soviet leader's approval of the operation. Thus committed, it is no wonder that Churchill was alarmed at the turn matters had taken during his absence from London. With characteristic vigor he at once sprang into action to restore the strategic concept of TORCH to the shape he believed essential to success.
Matloff and Snell, Strategic Planning, 1941-1942, p. 289. Bryant, Turn of the Tide, p. 403. Churchill, Hinge of Fate, pp. 484-86; Bryant, Turn of the Tide, pp. 373-74.
In a series of messages to Roosevelt, he urged the establishment of a definite date for D Day, and argued eloquently for an invasion along the broadest possible front in order to get to Tunisia before the Germans. "The whole pith of the operation will be lost," he cabled, "if we do not take Algiers as well as Oran on the first day." At the same time he urged Eisenhower to consider additional landings at Bone and Philippeville. He was confident that a foothold in both places could be attained with comparative ease and expressed the opinion that a strong blow deep inside the Mediterranean would bring far more favorable political results vis-a-vis Spain and the French in North Africa than would an assault on Casablanca. He was not opposed to a feint on that port but he feared making it the main objective of the initial landings. Because of the dangerous surf conditions, he argued, "Casablanca might easily become an isolated failure and let loose upon us ... all the perils which have anyway to be faced." As to the time of the attack, he would launch it by mid-October at the latest. To meet that target date, he believed naval vessels and combat loaders could be found somewhere and outloading speeded up.
Roosevelt, equally unwilling to accept a delay, proposed in his reply two simultaneous landings of American troops, one near Casablanca, the other at Oran, to be followed by the seizure of the road and rail communications between the two ports and the consolidation of a supply base in French Morocco that would be free from dependence on the route through the Strait of Gibraltar. He appreciated the value of three landings but pointed out that there was not currently on hand or in sight enough combat shipping and naval and air cover for more than the two landings. He agreed however that both the Americans and the British should re-examine shipping resources "and strip everything to the bone to make the third landing possible." In his reply Roosevelt also conveyed his views on the national composition of the forces to be used in the initial landings within the Mediterranean. Recent intelligence reports from Vichy and North Africa had convinced him that this was a matter of such great political import that the success or failure of TORCH might well depend on the decision made. These reports indicated that in the breasts of most Frenchmen in North Africa an anti-British sentiment still rankled in consequence of the evacuation at Dunkerque, the de-
Churchill, Hinge of Fate, p. 528. Ibid., p. 530. Msg 1511, London to AGWAR, 26 Aug 42, ABC 381 (7-25-42) Sec. 4 B. Churchill, Hinge of Fate, p. 531. Msg, Roosevelt to Churchill, 30 Aug 42, Exec 5, Item 1; Churchill, Hinge of Fate, p. 532.
destruction visited on the French fleet at Mers-el-Kebir, British intervention in the French dependencies of Syria and Madagascar, and the abortive attack by British-sponsored de Gaulle forces on Dakar. Both the President and his advisers were convinced that the strength of this sentiment was such that the inclusion of British troops in the assault was extremely dangerous. Roosevelt therefore insisted on confining the initial landings to American troops.
Churchill did not share the view that Americans "were so beloved by Vichy" or the British "so hated" that it would "make the difference between fighting and submission." Nevertheless he was quite willing to go along with the President's contention that the British should come in after the political situation was favorable, provided the restriction did not compromise the size or employment of the assault forces. At the same time he appropriately pointed out that the American view on the composition of the assault would affect shipping arrangements and possibly subsequent operations. Since all the assault ships would be required to lift purely American units, British forces would have to be carried in conventional vessels that could enter and discharge at ports. This necessarily would delay follow-up help for some considerable time should the landings be stubbornly opposed or even held up.
As a result of the transatlantic messages between the two political leaders, a solution to the impasse of late August gradually but steadily began to emerge. On 3 September, Roosevelt, who had promised to restudy the feasibility of more than two landings, came up with a new plan in which he proposed three simultaneous landings: at Casablanca, Oran, and Algiers. For Casablanca he proposed a force of 34,000 in the assault and 24,000 in the immediate follow-up (all United States); for Oran, 25,000 in the assault and 20,000 in the immediate follow-up (all United States); for Algiers, 10,000 in the initial beach landing (all United States) to be followed within an hour by British forces. All British forces in the follow-up, the size of which would be left to Eisenhower, would debark at the port of Algiers from non-combat loaded vessels. All the American troops for the Casablanca landing were to come directly from the United States; all those for Oran and Algiers, from the American forces in the United Kingdom. As for shipping, the United States could furnish enough combat
AFHQ Commander in Chief Despatch, North African Campaign, p. 4. These views of Churchill are not in accord with the reports from British intelligence agents that Churchill showed Harry Hopkins in July when he was urging the United States to accept a North African offensive. Nor are they the same as those expressed in his message of 12 July to Dill. Sherwood, Roosevelt and Hopkins, pp. 610-11; Msg, Churchill to Field Marshal Dill, 12 Jul 42, ABC 381 (7-25-42) Sec. 4. Churchill, Hinge of Fate, p. 534.
loaders, ready to sail on 20 October, to lift 34,000 men and sufficient transports and cargo vessels to lift and support 52,000 additional troops. Total available shipping under U.S. control, he estimated, was enough to move the first three convoys of the proposed Casablanca force. This did not include either the American transports, sufficient to lift 15,000 men, or the nine cargo vessels in the United Kingdom that had previously been earmarked for the TORCH operation. Under the President's proposal, the British would have to furnish (1) all the shipping (including combat loaders) for the American units assigned to take Oran and Algiers except the aforementioned American vessels in the United Kingdom, (2) the additional British troops required for the Algiers assault and follow-up, and (3) the naval forces for the entire operation, less those that the United States could furnish for the Casablanca expedition.
Churchill replied to the American proposal at once, suggesting only one modification of importance, a shift of ten or twelve thousand troops from the Casablanca force to that at Oran in order to give more strength to the inside landings. Unless this was done, he pointed out, the shortage in combat loaders and landing craft would rule out an assault on Algiers.
Roosevelt consented to a reduction of approximately 5,000 men in the Casablanca force and expressed the belief that this cut, along with a previous one made in the Oran force, would release enough combat loaders for use at Algiers. Whatever additional troops were needed for that landing the President believed could be found in the United Kingdom. To these proposals the Prime Minister agreed on 5 September.
The scope and direction of the landings were now decided; the "transatlantic essay contest" was over. Only the date of the invasion remained to be settled. The planning staffs in both Washington and London, after six weeks of frustrating uncertainty, could now breathe a sigh of relief and proceed with definite operational and logistical preparation without the harassing fear that the work of one day would be upset by a new development in strategy the next.
The final decision represented a compromise on the conflicting strategic concepts of Washington and London. It sought to minimize the risks to the line of communications involved in putting the full strength of the Allied effort inside the Mediterranean without giving up hope of gaining Tunisia quickly. The plan to make initial landings east of Algiers at Philippeville and Bone, advocated by the British,
Msg 144, Prime Minister to Roosevelt, 5 Sep 42, Exec 5, Item 1; Churchill, Hinge of Fate, Ch. VII; Bryant, Turn of the Tide, p. 403.
was abandoned but the assault on Algiers was retained at the expense of the forces operating against Casablanca and Oran. The political desirability of an all-American assault, though probably still valid, was compromised to the extent that British forces were to be used at Algiers in the immediate follow-up and for the eastward push into Tunisia after a lodgment had been attained.
No date was set for the attack. The decision the Combined Chiefs left to Eisenhower who had a number of matters to consider in making it. Because of broad political and strategic reasons and the normal deterioration in weather conditions in the area of impending operations during the late fall, the earlier the landings, the better. The vital need for tactical surprise pointed to the desirability of a new-moon period. But in the final analysis D Day would be determined by the time needed to assemble and prepare necessary shipping, acquire naval escorts, equip American units in the United Kingdom, and train assault troops and landing craft crews in amphibious operations. By mid-September Eisenhower was sufficiently convinced that his logistical and training problems could be solved by late October and so he set 8 November for the attack.
His optimism that this date could be met was not shared by all his staff, particularly those acquainted with the tremendous logistical tasks that remained to be completed. More than the political leaders and strategic planners they realized that no task forces of the size contemplated could be fully equipped and shipped in the short time remaining, no matter how strongly imbued with a sense of urgency everyone concerned might be. If there was to be an invasion at all in November, they realized that the Allies would have to cut deeply into normal requirements and resort to considerable improvisation. Events were to prove that those who doubted the complete readiness to move on 8 November were correct.
Even in retrospect, it is debatable whether the decision to invade North Africa was the soundest strategic decision that could have been made at the time and under the existing circumstances. If there had to be an operation in the Atlantic area in 1942 that had a chance of success, few students of World War II will dispute today that TORCH was to be preferred over SLEDGEHAMMER. The shortage of landing craft and other resources necessary to attain a lodgment in northwest Europe and to sustain it afterward was sufficient reason for the rejection
CCS 103/3, 26 Sep 42, Sub: Outline Plan Opn TORCH. Leighton and Coakley, Global Logistics, 1940-1943, p. 424. Memo, Col Hughes, DCAO AFHQ, for Gen Clark, 14 Sep 42, sub: Estimate of the Supply and Administrative Aspects of Proposed Operations, original in European Theater of Operations file, USFET AG 400, Supplies and Equipment, Vol. V.
of SLEDGEHAMMER. There was little real doubt but that TORCH would siphon off the necessary men and equipment required for ROUNDUP in 1943. This the American military leaders saw clearly as did the British, although the latter never admitted it openly in conference. The real question therefore remains: Was it wise to embark on an operation in the northwest African area in 1942 at the expense of a possible direct attack against the Continent in 1943? The British as a group and some Americans, notably the President, believed it was; most of the American military leaders and strategic planners thought otherwise.
The preference of the British for TORCH undoubtedly stemmed fundamentally from their opposition to an early frontal assault on Festung Europa. Their inclination for a peripheral strategy was based in part on tradition, in part on previous experience in the war, in part on the desirability of opening up the Mediterranean, and in part on the need of bolstering their bastions in the Middle East. More than the Americans they knew what it meant to try to maintain a force in western Europe in the face of an enemy who could move swiftly and powerfully along inner overland lines of communications. Having encountered the force of German arms on the Continent earlier in the war, they naturally shied away from the prospect of meeting it head on again until it had been thoroughly weakened by attrition.
The American military leaders, on the other hand, less bound by tradition and confident that productive capacity and organization would give the Allies overwhelming odds within a short time, believed the war could be brought to an end more quickly if a main thrust was directed toward the heart of the enemy. In their opinion the enemy, softened by heavy and sustained preliminary bombardment from the air, would become a ready subject for such a thrust by the summer of 1943. They also believed that an early cross-Channel attack was the best way to help the Russians whose continued participation in the war was a matter of paramount importance. They did not want SLEDGEHAMMER any more than the British, but fought against scrapping it before Russia's ability to hold out was certain. They opposed entry into North Africa because they did not consider it an area where a vital blow could be struck and because they wanted to save ROUNDUP. Churchill, Brooke, and others may assert, as they do, that no cross-Channel attack would have been feasible in 1942 or in 1943 because the Allies lacked the means and the experience in conducting amphibious warfare, and because the enemy was too strong in western Europe. Marshall and his supporters can contend with equal vigor that had not TORCH and the preparations for subsequent operations in the Mediterranean drained off men and resources, depleted the
reserves laboriously built up in the United Kingdom under the BOLERO program, wrecked the logistical organization in process of being established there, and given the enemy an added year to prepare his defenses, a cross-Channel operation could have been carried out successfully in 1943 and the costly war brought to an end earlier. Whose strategy was the sounder will never be known. The decision that was made was a momentous one in which political and military considerations were so intermingled that it is difficult to determine which carried the greater weight. For that reason if for no other, it will be the subject of controversy as long as men debate the strategy of World War II.
George F. Howe, Northwest Africa: Seizing the Initiative in the West (Washington, 1957), in UNITED STATES ARMY IN WORLD WAR II, covers in detail the operations that led to victory in Tunisia in May 1943. The Navy story is related by Samuel Eliot Morison, The Battle of the Atlantic, September 1939-May 1943, Vol. I (Boston: Little, Brown and Company, 1950), and Operations in North African Waters, October 1942-June 1943, Vol. II (Boston: Little, Brown and Company, 1950), History of United States Naval Operations in World War II. Books that deal with the TORCH decision are: Maurice Matloff and Edwin M. Snell, Strategic Planning for Coalition Warfare 1941-1942 (Washington, 1953) and Richard M. Leighton and Robert W. Coakley, Global Logistics and Strategy, 1940-1943, in UNITED STATES ARMY IN WORLD WAR II; Robert E. Sherwood, Roosevelt and Hopkins: An Intimate History (New York: Harper & Brothers, 1948); Henry L. Stimson and McGeorge Bundy, On Active Service in Peace and War (New York: Harper & Brothers, 1948); Winston S. Churchill, The Hinge of Fate (Boston: Houghton Mifflin Company, 1950); Arthur Bryant, The Turn of the Tide (Garden City, N.Y.: Doubleday and Company, 1957).
LEO J. MEYER, Historian with OCMH since 1948. B.A., M.A., Wesleyan University; Ph.D., Clark University. Taught: Clark University, Worcester Polytechnical Institute, New York University. Deputy Chief Historian,
OCMH. Troop Movement Officer, New York Port of Embarkation; Chief of Movements, G-4, European theater; Commanding Officer, 14th Major Port, Southampton, England; Secretariat, Transportation Board. Legion of Merit, Bronze Star, O.B.E. Colonel, TC (Ret.). Author: Relations Between the United States and Cuba, 1898-1917 (Worcester, 1928); articles in Encyclopedias Americana and Britannica, Dictionary of American Biography, Dictionary of American History, and various professional journals. Co-author: The Strategic and Logistical History of the Mediterranean Theater of Operations, to be published in UNITED STATES ARMY IN WORLD WAR II. | http://www.history.army.mil/books/70-7_07.htm | 13 |
87 | The Excel-Formulas manual helps you to understand Excel.
Excel Formulas & Functions Tips & Techniques
Excel makes use of formulas (mathematical expressions that you create) and functions (mathematical expressions already available in Excel) to dynamically calculate results from data in your worksheets.
Constructing a formula
• To start entering a formula in a cell, click in that cell and then type the formula. Type return or tab to move to the next cell when you have finished entering the formula.
• All formulas begin with the = symbol.
• All formulas use the following mathematical operators:
o * multiplication
o / division
o + addition
o - subtraction
• Formulas containing numbers will produce a result that will not ever change.
o The formula =3*8 produces the result 24.
• However, a formula containing cell references produces a result that may change if the data in those cells changes.
o The formula =C3+D3 will produce a result based upon the data in cells C3 and D3.
Using the order of mathematical operations
All formulas utilize the standard mathematical order of operations when calculating results.
• If a part of a formula is in parentheses, that part will always be calculated before the rest of the formula.
o The formula =(B2*C2)-A3 will subtract data in cell A3 from the multiplied product of cells B2 and C2.
• After expressions in parentheses, Excel will calculate your formula using the math operators in the following order:
o Multiplication
o Division
o Addition
o Subtraction
• In other words, Excel will begin to parse your formula starting with any multiplication and division. Once this is complete, Excel will add and subtract the remainder of your formula.
Inserting a function
Each of Excel’s functions is a predefined formula, and most act on a range of cells that you select. (Excel refers to each range of cells in the function as an argument.) Although a few functions do not use arguments, most have one or more and some complex functions use as many as 3 or 4 arguments. Excel provides a Paste Function window to simplify the process of inserting functions into your worksheets and eliminate the need to remember the exact syntax of each function.
• Select the cell into which you want to insert a function.
• From the Insert menu, choose Function. Alternatively, you can click on the Paste Function button on the standard toolbar.
• In the Paste Function window, click on the function category containing the function you want. Next, click on the name of the function you wish to insert. Once you have selected a function, click OK.
• Next, Excel will display a syntax window to help you construct the function. From this window, first click on the collapse button (labeled with a red arrow) to the right of the box labeled Number1 or Value1 (depending on the function you chose to insert).
• Drag to select the range of cells to be included as the function’s first argument. Type enter.
• To insert additional arguments into the function, follow this process using the other Number boxes in the syntax window.
• When you have finished, click OK in the syntax window to insert the function into your worksheet.
Searching for help on functions
From the Paste Function window, Excel offers help on a function that you have selected as well as help finding the function that will perform a task you describe.
Getting help on a specific function
• To get help on a specific function, click on the function’s name at the right of the Paste Function window and then click on the Help button at the window’s lower left.
• Click on Help with this feature in the yellow callout box that appears, and then click on Help on selected function in the callout box that appears next. Excel will display its Help topic on the function that you selected.
Finding a function
• From the Paste Function window, click on the Help button in the lower left corner, and then click on Help with this feature in the yellow callout box that appears.
• In the box at the bottom of the next yellow callout box, type a description of the calculation you wish to perform. Click on Search.
• Excel will display a list of functions that may meet your criteria in the Recommended list at the right of the Paste Function window. To learn more about one of these functions, click on its name and then click Help on selected function.
Using functions with external data
Although most functions utilize data on the same worksheet, you can also use data on other worksheets or in other workbooks. In this way, you can consolidate data from multiple sources into an executive summary.
• Before beginning, open any workbooks that contain data to be used in your function.
• Select the cell into which you want to insert a function.
• Open the Paste Function window, click on the name of the function you wish to insert, and click OK.
• Collapse the gray syntax window, using the collapse button (labeled with a red arrow).
• Navigate so that the worksheet or workbook containing your data is visible on your screen.
• Drag to select the range of cells to be included as the function’s first argument. Type enter.
• Excel will insert the reference to the cells that you selected (including the name of the external worksheet and workbook) into your function. Click OK to finish your function.
Tip: Naming external data ranges
When you select a range of cells on an external worksheet, Excel will add the name of your external worksheet to the cell range reference that it creates. In the example above, Excel refers to the selected cells (from the expenses worksheet) as expenses!D5:D15. If those cells were in an external workbook, the name of that workbook would preface the worksheet name. In general, references to external data look like this: [workbook name.xls]worksheet name!C1:C34
Tip: Using external data in formulas
Use this technique to insert external data in your formulas, as well. When creating a formula, navigate to the external worksheet containing your data and click to select that cell’s data for use in your formula. Excel will automatically insert the correct cell reference (including the worksheet and workbook names) into your formula.
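Worked example (the cell values and the workbook name here are made up purely for illustration): if B2 contains 5, C2 contains 4, and A3 contains 6, the formula =(B2*C2)-A3 described above returns (5*4)-6 = 14, and the result updates automatically whenever any of those three cells change. Using the external reference format just shown, if the expenses worksheet lived in a separate workbook called budget.xls, a function could refer to it as =AVERAGE([budget.xls]expenses!D5:D15).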
Function Cheat Sheet
Functions without arguments
Rand: Generates a random number between 0 and 1. Syntax: =Rand(). Example: =Rand()*3 (generates a random number between 0 and 3).
Pi: Generates the value of pi to 14 decimal places. Syntax: =Pi(). Example: =Pi().
Functions with 1 argument
Average: Produces the average of the data in a range of cells. Syntax: =average(Cx:Cy). Example: =average(C1:C12).
Max: Produces the greatest value in a column of cells. Syntax: =max(Cx:Cy). Example: =max(C1:C12).
Hour: Returns the number of hours past midnight for the specified time. Syntax: =hour(Cx) or =hour(time). Examples: =hour(A34); =hour(1:35 PM).
Minute: Returns the number of minutes past the hour for the specified time. Syntax: =minute(Cx) or =minute(time). Examples: =minute(A34); =minute(1:35 PM).
Sqrt: Produces the square root of its argument. Syntax: =sqrt(Cx) or =sqrt(number). Examples: =sqrt(Cx); =sqrt(9).
Functions with 2 arguments
Round: Rounds a value to a specified digit to the left or right of the decimal point. Syntax: =round(Cx, number) or =round(value, number). Examples: =round(A22, 2) (rounds to 2 decimal places); =round(123.45, 0) (rounds to 0 decimal places).
Countif: Counts the number of cells in a range that meet a specified criteria. Syntax: =countif(Cx:Cy, ">criteria"). Example: =countif(C1:C12, ">150").
Functions with 3 arguments
If: Provides the basis for a decision; if the condition is met, one answer is returned; if the condition is not met, another answer is returned. Syntax: =if(condition, "answer1", "answer2"). Example: =if(A1>0, "yes", "no"); if the value of A1 is positive, Excel returns the answer "yes"; otherwise, Excel returns the answer "no".
Sumif: Produces the sum of the cells in one range for the rows where the cells in a second range meet a criterion. Syntax: =sumif(Cx:Cy, criterion, Dx:Dy), where Cx:Cy is the range of cells to meet the criterion and Dx:Dy is the range of cells from which the sum will be calculated. Example: =sumif(C1:C12, ">150", D1:D12).
Functions with one or more arguments
And: Returns a logical TRUE response if all of its arguments are true; otherwise returns FALSE. Syntax: =and(condition1, condition2, condition3…). Example: =and(A1>0, A2>1, A3>3).
Or: Returns a logical TRUE response if one or more arguments are true; otherwise returns FALSE. Syntax: =or(condition1, condition2, condition3…). Example: =or(A1>70, A1<80).
Sum: Totals the data in a column of cells. Syntax: =sum(Cx:Cy, Dx:Dy). Example: =sum(C1:C12).
Referencing a range of cells…
• In other worksheets: worksheet!A1:D4
• In other workbook: c:\my documents\[test.xls]Sheet1!A2:A5
• Across several worksheets: sheet1:sheet5!A12
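Functions from the table can also be nested inside one another (a combination not shown in the table, given here only as an illustration): =round(average(C1:C12), 2) first averages the cells C1 through C12 and then rounds that average to 2 decimal places.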
| http://www.docstoc.com/docs/120037420/Excel-Formulas-manual | 13
69 | The functions described here perform various operations on vectors and matrices.
Do a vector concatenation; this operation is written ‘x | y’ in a symbolic formula. See Building Vectors.
Return the length of vector v. If v is not a vector, the result is zero. If v is a matrix, this returns the number of rows in the matrix.
Determine the dimensions of vector or matrix m. If m is not a vector, the result is an empty list. If m is a plain vector but not a matrix, the result is a one-element list containing the length of the vector. If m is a matrix with r rows and c columns, the result is the list ‘(r c)’. Higher-order tensors produce lists of more than two dimensions. Note that the object ‘[[1, 2, 3], [4, 5]]’ is a vector of vectors not all the same size, and is treated by this and other Calc routines as a plain vector of two elements.
Abort the current function with a message of “Dimension error.” The Calculator will leave the function being evaluated in symbolic form; this is really just a special case of reject-arg.
Return a Calc vector with args as elements. For example, ‘(build-vector 1 2 3)’ returns the Calc vector ‘[1, 2, 3]’, stored internally as the list ‘(vec 1 2 3)’.
Return a Calc vector or matrix all of whose elements are equal to obj. For example, ‘(make-vec 27 3 4)’ returns a 3x4 matrix filled with 27's.
If v is a plain vector, convert it into a row matrix, i.e., a matrix whose single row is v. If v is already a matrix, leave it alone.
If v is a plain vector, convert it into a column matrix, i.e., a matrix with each element of v as a separate row. If v is already a matrix, leave it alone.
Map the Lisp function f over the Calc vector v. For example, ‘(map-vec 'math-floor v)’ returns a vector of the floored components of vector v.
Map the Lisp function f over the two vectors a and b. If a and b are vectors of equal length, the result is a vector of the results of calling ‘(f ai bi)’ for each pair of elements ai and bi. If either a or b is a scalar, it is matched with each value of the other vector. For example, ‘(map-vec-2 'math-add v 1)’ returns the vector v with each element increased by one. Note that using ‘'+’ would not work here, since
defmath does not expand function names everywhere, just where they are in the function position of a Lisp expression.
Reduce the function f over the vector v. For example, if v is ‘[10, 20, 30, 40]’, this calls ‘(f (f (f 10 20) 30) 40)’. If v is a matrix, this reduces over the rows of v.
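For example, by analogy with the map-vec examples above, ‘(reduce-vec 'math-add v)’ should total the elements of v.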
Reduce the function f over the columns of matrix m. For example, if m is ‘[[1, 2], [3, 4], [5, 6]]’, the result is a vector of the two elements ‘(f (f 1 3) 5)’ and ‘(f (f 2 4) 6)’.
Return the nth row of matrix m. This is equivalent to ‘(elt m n)’. For a slower but safer version, use mrow. (See Extracting Elements.)
Return the nth column of matrix m, in the form of a vector. The arguments are not checked for correctness.
Return a copy of matrix m with its nth row deleted. The number n must be in range from 1 to the number of rows in m.
Flatten nested vector v into a vector of scalars. For example, if v is ‘[[1, 2, 3], [4, 5]]’ the result is ‘[1, 2, 3, 4, 5]’.
If m is a matrix, return a copy of m. This maps copy-sequence over the rows of m; in Lisp terms, each element of the result matrix will be eq to the corresponding element of m, but none of the cons cells that make up the structure of the matrix will be eq. If m is a plain vector, this is the same as copy-sequence.
Exchange rows r1 and r2 of matrix m in-place. In other words, unlike most of the other functions described here, this function changes m itself rather than building up a new result matrix. The return value is m, i.e., ‘(eq (swap-rows m 1 2) m)’ is true, with the side effect of exchanging the first two rows of m. | http://www.gnu.org/software/emacs/manual/html_node/calc/Vector-Lisp-Functions.html | 13 |
69 | This unit examines regular tessellations, that is, tessellations that can be made using only one type of regular polygon, and semi-regular tessellations, where more than one type of regular polygon is involved. Students are required to investigate what properties tessellating shapes must have in order to cover the plane with no gaps or overlaps.
- create regular and semi-regular tessellations of the plane
- demonstrate why a given tessellation will cover the plane
Tessellations are frequently found in kitchen and bathroom tiles and lino. You can see them in the pattern on carpets and decorative patterns on containers and packaging. Tessellations are a neat and symmetric form of decoration. They also provide a nice application of some of the basic properties of polygons.
To be able to fully understand tessellations using regular polygons, you need to know about their symmetry and about the size of their interior angles. All of the facts that you need to know are accessible to Level 4 students. This unit takes children through the steps that they need in order to establish that there are only three regular polygons that tile the plane. This unit thus follows on from Keeping in Shape from Level 3, where regular tessellations are first discussed.
Moving on from here, the children can consider semi-regular tilings. All that they need to know here is how to sum the interior angles of various regular polygons to 360° . The rest is up to their imagination.
- Show the students a large cut out equilateral triangle. Mark the vertices (corners) of the triangle with different colours. Say, "I am going to tear off the corners of this triangle and place them around this point (draw point on board). What do you think will happen?" Students may have encountered this before but let them guess what they think will happen. Tear off the corners and place them about the point to confirm that a half turn (or 180°) is created.
- Ask them whether they think this will work for any triangle, no matter what shape it is (isosceles, scalene, obtuse). Is there a way to check this without tearing off the corners? (Measure the interior (inside) angles to see if they total 180º (a half turn).) Tell the students to make a variety of triangles and test this conjecture.
- If the sum of the interior angles of any triangle is the same - 180º, is it likely that the sum of the interior angles of any quadrilateral is the same? Ask them to predict what will happen if the corners of any quadrilateral are joined about a point. Have them cut out several quadrilaterals of different shapes to check their predictions. Remind them to mark the corners before tearing them off.
- Once it has been found that the sum of the interior angles of any quadrilateral is 360º (they form a full turn about a point), then students can investigate the sum of interior angles of other polygons by measuring with protractors or tearing corners. The results can be captured in the table below. Encourage the students to look for a pattern to predict what the next angular sum will be.
Number of Sides | Sum of Angles (º) | Interior Angle of Regular Polygon (º)
3 | 180 | 60
4 | 360 | 90
5 | 540 | 108
6 | 720 | 120
Note that for each side that is added, the sum of the interior angles increases by 180º. This can be explained by the fact that the addition of a side creates another triangle within the shape, and that each triangle has an angular sum of 180º.
Diagrams: Quadrilateral (2 triangles), Pentagon (3 triangles), Hexagon (4 triangles).
Challenge the students to use their results to draw a regular triangle (equilateral), a regular quadrilateral (square), a regular pentagon, a regular hexagon, and a regular octagon. Regular means that all sides are the same length and all interior angles are equal.
- What size are the interior angles of the regular figures we have just been talking about? How can we find out?
For the regular hexagon with six vertices and an angular sum of 720º, we need to divide the sum by the number of angles to find each angle size. So, since 720 ÷ 6 = 120, each interior angle in a regular hexagon is 120°.
- Get the children to complete the table above for polygons with up to 12 sides. (The regular dodecagon (12-sided polygon) has an angular sum of 1800°, so each interior angle will be 150°.)
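A quick way for the teacher to check the completed table is the short Python sketch below. It is an optional extra rather than part of the unit, and simply applies the (n - 2) x 180° rule explained above.

```python
# Sum of interior angles and interior angle of a regular polygon,
# for shapes with 3 to 12 sides, using the (n - 2) x 180 rule above.
for n in range(3, 13):
    angle_sum = (n - 2) * 180   # each extra side adds one more triangle (180 degrees)
    interior = angle_sum / n    # regular polygon: all interior angles are equal
    print(n, "sides:", angle_sum, "degrees in total,", interior, "degrees each")
```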
- Use a set of pattern blocks to show how equilateral triangles tessellate. Tessellate means that they cover the plane infinitely with no gaps or overlaps. Send the students away in groups with their own set of pattern blocks to explore what other tessellations can be discovered using shapes from the set. You may want them to record the tessellations using isometric dot paper.
After a period of exploration bring the class back together to share the tessellations. Note that there are three regular tessellations that can be found, that is tessellations involving use of the same regular polygon. These regular tessellations are 3.3.3.3.3.3 (six triangles about each point or vertex), 4.4.4.4 (four squares about each vertex), and 6.6.6 (three hexagons about each vertex).
- Focussing on the regular tessellations ask why it is that these patterns work without gaps or overlaps. You may need to remind the students of the angle measures they found in Getting Started. There are two key properties of the shapes involved in regular tessellations:
- side lengths are the same;
- the sum of angles meeting at each vertex is exactly 360° (a full turn).
- For example, in the case of the tessellation with squares (4.4.4.4), the side lengths are the same and the four angles of 90° add up to 360°.
Confirm that the "angles around a point" principle holds for the other tessellations that students have found.
- Remind the students of the table on angle sums (Copymaster 1) that contains the interior angle measurements for regular polygons. Ask: From the table could we have expected the triangles, squares, and hexagons to have tessellated by themselves? How? (Each of these shapes has interior angles that can be divided into 360° evenly.)
- Many other tessellations are possible with combinations of pattern block shapes. Get them to experiment with combinations of regular polygons. These tessellations are known as semi-regular tessellations. For example, from the table it looks like 2 squares and three triangles might fit together about a point because 90°+ 90°+ 60°+ 60°+ 60° = 360°.
- These might be arranged in different ways, e.g. square-triangle-triangle-square-triangle (4.3.3.4.3) and square-square-triangle-triangle-triangle (4.4.3.3.3). Try to produce all possible combinations using pattern blocks.
- Give the students Copymaster 2 that gives templates for the regular polygons up to the 12-sided shape (dodecagon). By stapling through the centre of each shape onto blank pages underneath students can make multiples of each regular polygon. Get them to use the table and the cut out shapes to find as many semi-regular tessellations as they can.
- Share the results as a class to see if all the possible semi-regular tessellations have been found. These are 3.3.3.3.6, 3.3.3.4.4, 3.3.4.3.4, 3.4.6.4, 3.6.3.6, 4.8.8, 3.12.12, and 4.6.12 (eight possibilities; a short checking sketch for these angle totals is given below). Students may wish to create a poster presenting their favourite semi-regular tessellations explaining why each combination of shapes can tessellate the plane. Other tessellations are possible using regular polygons if the constraint of each vertex having the same arrangement of shapes is removed. For example, hexagons, squares and triangles can be used in this way.
Note that with some vertices the arrangement is 3.4.6.4 and at others it is 3.3.4.3.4 if the shapes are read clockwise about each vertex.
Get the students to investigate what other arrangements can be found in this way. Copymaster 4 contains examples of such arrangements. Note that this embodies the commutative principle seen in the square-square-triangle-triangle-triangle arrangements that changing the order of the addends does not affect their sum, in this case 360°.
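The search for combinations described above can also be checked with a short Python sketch (an optional extra, not part of the original unit). It lists every way of choosing three to six regular polygons, with up to 12 sides, whose interior angles total exactly 360°. An angle total of 360° is necessary but not sufficient: some of the combinations it prints, and some orderings of the shapes around a vertex, cannot be continued to cover the whole plane, which is why checking with the cut out shapes still matters.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Exact interior angles of regular polygons with 3 to 12 sides.
interior = {n: Fraction((n - 2) * 180, n) for n in range(3, 13)}

# List every multiset of 3 to 6 polygons whose interior angles total 360 degrees.
# Totalling 360 is a necessary condition for a vertex arrangement, but not every
# combination (or every ordering of it) extends to a tessellation of the plane.
for k in range(3, 7):
    for combo in combinations_with_replacement(sorted(interior), k):
        if sum(interior[n] for n in combo) == 360:
            print(combo)
```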
- Observation of the table might persuade them that tessellating combinations are not likely for the heptagon (7 sides), decagon (10 sides) and hendecagon (11 sides) since their interior angle measures will not yield combinations that total 360°. Is it possible to show that there are only three regular tessellations?
- There are two approaches for this depending on the ability of your students. The first way is to notice that no polygon has an interior angle greater than 180°. They are always less than this. And no polygon has an interior angle smaller than 60°. (You can see this intuitively from the table.) That means that you need at least three polygons to come together at a vertex (it has to be an integer more than 360/180) and no more than six (it can be at most 360/60). So,
if it is 3, the interior angles have to be 360°/3 = 120°;
if it is 4, the interior angles have to be 360°/4 = 90°;
if it is 5, the interior angles have to be 360°/5 = 72°; and
if it is 6, the interior angles have to be 360°/6 = 60°.
Only three of these angles (120°, 90°, and 60°) occur as the interior angle of a regular polygon. So the only regular tessellations are those made of equilateral triangles, squares, or hexagons.
- Alternatively, look at the size of interior angles. These have to be (n – 2) 180°/n for an n-sided regular polygon. And this angle has to divide 360°, so 360° divided by (n – 2) 180°/n has to be a whole number. So 2n/(n – 2) has to be a whole number. This can only happen if n = 3, 4 or 6. Perhaps the easiest way to see this is to rewrite 2n/(n – 2) as 2 + 4/(n – 2). Now we see that n – 2 has to divide 4. Hence n – 2 = 1, 2 or 4. So n = 3, 4 or 6. | http://nzmaths.co.nz/resource/fitness | 13 |
121 | Michael Faraday is generally credited with the discovery of induction in 1831 though it may have been anticipated by the work of Francesco Zantedeschi in 1829. Around 1830 to 1832, Joseph Henry made a similar discovery, but did not publish his findings until later.
Faraday's law of induction is a basic law of electromagnetism that predicts how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF). It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids.
The Maxwell–Faraday equation is a generalisation of Faraday's law, and forms one of Maxwell's equations.
Electromagnetic induction was discovered independently by Michael Faraday and Joseph Henry in 1831; however, Faraday was the first to publish the results of his experiments. In Faraday's first experimental demonstration of electromagnetic induction (August 29, 1831), he wrapped two wires around opposite sides of an iron ring or "torus" (an arrangement similar to a modern toroidal transformer). Based on his assessment of recently discovered properties of electromagnets, he expected that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he connected the wire to the battery, and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is slightly different in form from the original version of Faraday's law, and does not describe motional EMF. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
Lenz's law, formulated by Heinrich Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced EMF and current resulting from electromagnetic induction (elaborated upon in the examples below).
Faraday's Law
Qualitative statement
The most widespread version of Faraday's law states:
The induced electromotive force in any closed circuit is equal to the negative of the time rate of change of the magnetic flux through the circuit.
This version of Faraday's law strictly holds only when the closed circuit is a loop of infinitely thin wire, and is invalid in other circumstances as discussed below. A different version, the Maxwell–Faraday equation (discussed below), is valid in all circumstances.
Faraday's law of induction makes use of the magnetic flux ΦB through a hypothetical surface Σ whose boundary is a wire loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is defined by a surface integral:
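\[ \Phi_B = \iint_{\Sigma(t)} \mathbf{B}(\mathbf{r}, t) \cdot d\mathbf{A} \]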
where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field, and B·dA is a vector dot product (the infinitesimal amount of magnetic flux). In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic flux lines that pass through the loop.
When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an EMF , defined as the energy available per unit charge that travels once around the wire loop (the unit of EMF is the volt). Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads. According to the Lorentz force law (in SI units),
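\[ \mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) \]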
the EMF on a wire loop is:
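\[ \mathcal{E} = \frac{1}{q} \oint_{\mathrm{wire}} \mathbf{F} \cdot d\boldsymbol{\ell} = \oint_{\mathrm{wire}} \left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) \cdot d\boldsymbol{\ell} \]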
where E is the electric field, B is the magnetic field (aka magnetic flux density, magnetic induction), dℓ is an infinitesimal arc length along the wire, and the line integral is evaluated along the wire (along the curve coincident with the shape of the wire).
The EMF is also given by the rate of change of the magnetic flux:
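\[ \mathcal{E} = -N \frac{d\Phi_B}{dt} \]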
where N is the number of turns of wire and ΦB is the magnetic flux in webers through a single loop.
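As a quick numerical illustration (not from the article), the Python sketch below evaluates this relation for a hypothetical coil whose flux varies sinusoidally; all numbers are assumed purely for the example.

```python
import math

# Hypothetical illustration of emf = -N * dPhi/dt for a coil whose flux
# varies sinusoidally: phi(t) = phi_max * sin(2*pi*f*t).
N = 50           # number of turns (assumed value)
phi_max = 2e-3   # peak flux through one loop, in webers (assumed value)
f = 60.0         # frequency of the flux variation, in hertz (assumed value)

def emf(t):
    # d/dt [phi_max * sin(2*pi*f*t)] = phi_max * 2*pi*f * cos(2*pi*f*t)
    dphi_dt = phi_max * 2 * math.pi * f * math.cos(2 * math.pi * f * t)
    return -N * dphi_dt

print(emf(0.0))  # about -37.7 volts; the peak magnitude is N * phi_max * 2*pi*f
```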
Maxwell–Faraday equation
The Maxwell–Faraday equation is a generalisation of Faraday's law that states that a time-varying magnetic field is always accompanied by a spatially-varying, non-conservative electric field, and vice-versa. The Maxwell–Faraday equation is
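\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \]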
The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin-Stokes theorem:
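\[ \oint_{\partial \Sigma} \mathbf{E} \cdot d\boldsymbol{\ell} = -\int_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A} \]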
where, as indicated in the figure:
- Σ is a surface bounded by the closed contour ∂Σ,
- E is the electric field, B is the magnetic field.
- dℓ is an infinitesimal vector element of the contour ∂Σ,
- dA is an infinitesimal vector element of surface Σ. Its direction is orthogonal to that surface patch; the magnitude is the area of an infinitesimal patch of surface.
Both dℓ and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin-Stokes theorem. For a planar surface Σ, a positive path element dℓ of curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.
The integral around ∂Σ is called a path integral or line integral.
Notice that a nonzero path integral for E is different from the behavior of the electric field generated by charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.
The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary.
If the path Σ is not changing in time, the equation can be rewritten:
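\[ \oint_{\partial \Sigma} \mathbf{E} \cdot d\boldsymbol{\ell} = -\frac{d}{dt} \int_{\Sigma} \mathbf{B} \cdot d\mathbf{A} \]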
Proof of Faraday's law
The four Maxwell's equations (including the Maxwell–Faraday equation), along with the Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism. Therefore it is possible to "prove" Faraday's law starting with these equations. Click "show" in the box below for an outline of this proof. (In an alternative approach, not shown here but equally valid, Faraday's law could be taken as the starting point and used to "prove" the Maxwell–Faraday equation and/or other laws.)
Outline of proof of Faraday's law from Maxwell's equations and the Lorentz force law. Consider the time-derivative of flux through a possibly moving loop, with area :
The integral can change over time for two reasons: The integrand can change, or the integration region can change. These add linearly, therefore:
where t0 is any given fixed time. We will show that the first term on the right-hand side corresponds to transformer EMF, the second to motional EMF (see above). The first term on the right-hand side can be rewritten using the integral form of the Maxwell–Faraday equation:
Next, we analyze the second term on the right-hand side:
This is the most difficult part of the proof; more details and alternate approaches can be found in references. As the loop moves and/or deforms, it sweeps out a surface (see figure on right). The magnetic flux through this swept-out surface corresponds to the magnetic flux that is either entering or exiting the loop, and therefore this is the magnetic flux that contributes to the time-derivative. (This step implicitly uses Gauss's law for magnetism: Since the flux lines have no beginning or end, they can only get into the loop by getting cut through by the wire.) As a small part of the loop moves with velocity v for a short time , it sweeps out a vector area vector . Therefore, the change in magnetic flux through the loop here is
where v is the velocity of a point on the loop .
Putting these together,
Meanwhile, EMF is defined as the energy available per unit charge that travels once around the wire loop. Therefore, by the Lorentz force law,
"Counterexamples" to Faraday's law
Faraday's disc electric generator. The disc rotates with angular rate ω, sweeping the conducting radius circularly in the static magnetic field B. The magnetic Lorentz force v × B drives the current along the conducting radius to the conducting rim, and from there the circuit completes through the lower brush and the axle supporting the disc. Thus, current is generated from mechanical motion.
A counterexample to Faraday's Law when over-broadly interpreted. A wire (solid red lines) connects to two touching metal plates (silver) to form a circuit. The whole system sits in a uniform magnetic field, normal to the page. If the word "circuit" is interpreted as "primary path of current flow" (marked in red), then the magnetic flux through the "circuit" changes dramatically as the plates are rotated, yet the EMF is almost zero, which contradicts Faraday's Law. After Feynman Lectures on Physics Vol. II page 17-3
Although Faraday's law is always true for loops of thin wire, it can give the wrong result if naively extrapolated to other contexts. One example is the homopolar generator (above left): A spinning circular metal disc in a homogeneous magnetic field generates a DC (constant in time) EMF. In Faraday's law, EMF is the time-derivative of flux, so a DC EMF is only possible if the magnetic flux is getting uniformly larger and larger perpetually. But in the generator, the magnetic field is constant and the disc stays in the same position, so no magnetic fluxes are growing larger and larger. So this example cannot be analyzed directly with Faraday's law.
Another example, due to Feynman, has a dramatic change in flux through a circuit, even though the EMF is arbitrarily small. See figure and caption above right.
In both these examples, the changes in the current path are different from the motion of the material making up the circuit. The electrons in a material tend to follow the motion of the atoms that make up the material, due to scattering in the bulk and work function confinement at the edges. Therefore, motional EMF is generated when a material's atoms are moving through a magnetic field, dragging the electrons with them, thus subjecting the electrons to the Lorentz force. In the homopolar generator, the material's atoms are moving, even though the overall geometry of the circuit is staying the same. In the second example, the material's atoms are almost stationary, even though the overall geometry of the circuit is changing dramatically. On the other hand, Faraday's law always holds for thin wires, because there the geometry of the circuit always changes in a direct relationship to the motion of the material's atoms.
Both of the above examples can be correctly worked by choosing the appropriate path of integration for Faraday's Law. Outside of the context of thin wires, the path must never be chosen to go through the conductor in the shortest direct path. This is explained in detail in "The Electromagnetodynamics of Fluid" by W. F. Hughes and F. J. Young, John Wiley Inc. (1965).
The principles of electromagnetic induction are applied in many devices and systems, including:
- Current clamp
- Electrical generators
- Electromagnetic forming
- Graphics tablet
- Hall effect meters
- Induction cookers
- Induction motors
- Induction sealing
- Induction welding
- Inductive charging
- Magnetic flow meters
- Mechanically powered flashlight
- Rowland ring
- Transcranial magnetic stimulation
- Wireless energy transfer
Electrical generator
The EMF generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the drum generator is based upon the figure to the right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right.
In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. It is interesting to understand how it arises that mechanical work is necessary to drive this current. When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labeled "induced B" in the figure). The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to decrease the flux through that side of the circuit, opposing the increase in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field increases the flux on this side of the circuit, opposing the decrease in flux due to rotation. Thus, both sides of the circuit generate an emf opposing the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy.
Electrical transformer
The EMF predicted by Faraday's law is also responsible for electrical transformers. When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, d ΦB / d t. Therefore, an electromotive force is set up in the second loop called the induced EMF or transformer EMF. If the two ends of this loop are connected through an electrical load, current will flow.
Magnetic flow meter
Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ℇ generated in the magnetic field B due to a conductive liquid moving at velocity v is thus given by:
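\[ \mathcal{E} = B \ell v \]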
where ℓ is the distance between electrodes in the magnetic flow meter.
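For example, with hypothetical values B = 0.1 T, ℓ = 0.05 m, and v = 2 m/s, the induced voltage would be ℇ = 0.1 × 0.05 × 2 = 0.01 V.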
Eddy currents
Conductors (of finite dimensions) moving through a uniform magnetic field, or stationary within a changing magnetic field, will have currents induced within them. These induced eddy currents can be undesirable, since they dissipate energy in the resistance of the conductor. There are a number of methods employed to control these undesirable inductive effects.
- Electromagnets in electric motors, generators, and transformers do not use solid metal, but instead use thin sheets of metal plate, called laminations. These thin plates reduce the parasitic eddy currents, as described below.
- Inductive coils in electronics typically use magnetic cores to minimize parasitic current flow. They are a mixture of metal powder plus a resin binder that can hold any shape. The binder prevents parasitic current flow through the powdered metal.
Electromagnet laminations
Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more lines of force than the inner portion, hence the induced electromotive force not being uniform, tends to set up currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature.
Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to 66 per inch, and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations.
This is a rotor approximately 20mm in diameter from a DC motor used in a CD player. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses.
Parasitic induction within inductors
In this illustration, a solid copper bar inductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the bar inductor. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar.
High current power-frequency devices such as electric motors, generators and transformers use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers.
Faraday's law and relativity
Two phenomena
Some physicists have remarked that Faraday's law is a single equation describing two different phenomena: the motional EMF generated by a magnetic force on a moving wire (see Lorentz force), and the transformer EMF generated by an electric force due to a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena. A reference to these two aspects of electromagnetic induction is made in some modern textbooks. As Richard Feynman states:
So the "flux rule" that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit applies whether the flux changes because the field changes or because the circuit moves (or both).... Yet in our explanation of the rule we have used two completely distinct laws for the two cases – for "circuit moves" and for "field changes".
We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena.
— Richard P. Feynman, The Feynman Lectures on Physics
Einstein's view
It is known that Maxwell's electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated. But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise—assuming equality of relative motion in the two cases discussed—to electric currents of the same path and intensity as those produced by the electric forces in the former case.
Examples of this sort, together with unsuccessful attempts to discover any motion of the earth relative to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.
— Albert Einstein, On the Electrodynamics of Moving Bodies
References
- S M Dhir (2007). "§6 Other positive results and criticism". Hans Christian Ørsted and the Romantic Legacy in Science: Ideas, Disciplines, Practices. Springer. ISBN 978-1-4020-2987-5.
- "Magnets". ThinkQuest. Retrieved 2009-11-06.
- "Joseph Henry". Notable Names Database. Retrieved 2009-11-06.
- Sadiku, M. N. O. (2007). Elements of Electromagnetics (fourth ed.). New York (USA)/Oxford (UK): Oxford University Press. p. 386. ISBN 0-19-530048-3.
- "Applications of electromagnetic induction". Boston University. 1999-22-07.
- Giancoli, Douglas C. (1998). Physics: Principles with Applications (Fifth ed.). pp. 623–624.
- Ulaby, Fawwaz (2007). Fundamentals of applied electromagnetics (5th ed.). Pearson Prentice Hall. p. 255. ISBN 0-13-241326-4.
- "Joseph Henry". Distinguished Members Gallery, National Academy of Sciences. Retrieved 2006-11-30.
- Faraday, Michael; Day, P. (1999-02-01). The philosopher's tree: a selection of Michael Faraday's writings. CRC Press. p. 71. ISBN 978-0-7503-0570-9. Retrieved 28 August 2011.
- Michael Faraday, by L. Pearce Williams, pp. 182–183
- Michael Faraday, by L. Pearce Williams, pp. 191–195
- Michael Faraday, by L. Pearce Williams, p. 510
- Maxwell, James Clerk (1904), A Treatise on Electricity and Magnetism, Vol. II, Third Edition. Oxford University Press, pp. 178–9 and 189.
- "Archives Biographies: Michael Faraday", The Institution of Engineering and Technology.
- Poyser, Arthur William (1892), Magnetism and electricity: A manual for students in advanced classes. London and New York; Longmans, Green, & Co., p. 285, fig. 248. Retrieved 2009-08-06.
- "The flux rule" is the terminology that Feynman uses to refer to the law relating magnetic flux to EMF.Richard Phillips Feynman, Leighton R B & Sands M L (2006). The Feynman Lectures on Physics. San Francisco: Pearson/Addison-Wesley. Vol. II, pp. 17-2. ISBN 0-8053-9049-9.
- Griffiths, David J. (1999). Introduction to Electrodynamics (Third ed.). Upper Saddle River NJ: Prentice Hall. pp. 301–303. ISBN 0-13-805326-X.
- Tipler and Mosca, Physics for Scientists and Engineers, p795, google books link
- Note that different textbooks may give different definitions. The set of equations used throughout the text was chosen to be compatible with the special relativity theory.
- Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray, ISBN 0-7195-3382-1
- Nave, Carl R. "Faraday's Law". HyperPhysics. Georgia State University. Retrieved 29 August 2011.
- Roger F Harrington (2003). Introduction to electromagnetic engineering. Mineola, NY: Dover Publications. p. 56. ISBN 0-486-43241-6.
- Davison, M. E. (1973). "A Simple Proof that the Lorentz Force, Law Implied Faraday's Law of Induction, when B is Time Independent". American Journal of Physics 41 (5): 713. doi:10.1119/1.1987339.
- Basic Theoretical Physics: A Concise Overview by Krey and Owen, p155, google books link
- K. Simonyi, Theoretische Elektrotechnik, 5th edition, VEB Deutscher Verlag der Wissenschaften, Berlin 1973, equation 20, page 47
- Images and reference text are from the public domain book: Hawkins Electrical Guide, Volume 1, Chapter 19: Theory of the Armature, pp. 272–273, Copyright 1917 by Theo. Audel & Co., Printed in the United States
- Images and reference text are from the public domain book: Hawkins Electrical Guide, Volume 1, Chapter 19: Theory of the Armature, pp. 270–271, Copyright 1917 by Theo. Audel & Co., Printed in the United States
- Griffiths, David J. (1999). Introduction to Electrodynamics (Third ed.). Upper Saddle River NJ: Prentice Hall. pp. 301–303. ISBN 0-13-805326-X. Note that the law relating flux to EMF, which this article calls "Faraday's law", is referred to in Griffiths' terminology as the "universal flux rule". Griffiths uses the term "Faraday's law" to refer to what this article calls the "Maxwell–Faraday equation". So in fact, in the textbook, Griffiths' statement is about the "universal flux rule".
- A. Einstein, On the Electrodynamics of Moving Bodies
Further reading
- Maxwell, James Clerk (1881), A treatise on electricity and magnetism, Vol. II, Chapter III, §530, p. 178. Oxford, UK: Clarendon Press. ISBN 0-486-60637-6.
- A simple interactive Java tutorial on electromagnetic induction National High Magnetic Field Laboratory
- R. Vega Induction: Faraday's law and Lenz's law - Highly animated lecture
- Notes from Physics and Astronomy HyperPhysics at Georgia State University
- Faraday's Law for EMC Engineers
- Tankersley and Mosca: Introducing Faraday's law
- Lenz's Law at work.
- A free Java simulation on motional EMF
- Two videos demonstrating Faraday's and Lenz's laws at EduMation | http://en.wikipedia.org/wiki/Electromagnetic_induction | 13
91 | Part II MATHEMATICS
5 Mathematical Understanding: An Introduction
Karen C. Fuson, Mindy Kalchman, and John D. Bransford
For many people, free association with the word “mathematics” would produce strong, negative images. Gary Larson published a cartoon entitled “Hell’s Library” that consisted of nothing but book after book of math word problems. Many students—and teachers—resonate strongly with this cartoon’s message. It is not just funny to them; it is true. Why are associations with mathematics so negative for so many people? If we look through the lens of How People Learn, we see a subject that is rarely taught in a way that makes use of the three principles that are the focus of this volume. Instead of connecting with, building on, and refining the mathematical understandings, intuitions, and resourcefulness that students bring to the classroom (Principle 1), mathematics instruction often overrides students’ reasoning processes, replacing them with a set of rules and procedures that disconnects problem solving from meaning making. Instead of organizing the skills and competences required to do mathematics fluently around a set of core mathematical concepts (Principle 2), those skills and competencies are often themselves the center, and sometimes the whole, of instruction. And precisely because the acquisition of procedural knowledge is often divorced from meaning making, students do not use metacognitive strategies (Principle 3) when they engage in solving mathematics problems. Box 5-1 provides a vignette involving a student who gives an answer to a problem that is quite obviously impossible. When quizzed, he can see that his answer does not make sense, but he does not consider it wrong because he believes he followed the rule. Not only did he neglect to use metacognitive strategies to monitor whether his answer made sense, but he believes that sense making is irrelevant.
How Students Learn: History, Mathematics, and Science in the Classroom BOX 5-1 Computation Without Comprehension: An Observation by John Holt One boy, quite a good student, was working on the problem, “If you have 6 jugs, and you want to put 2/3 of a pint of lemonade into each jug, how much lemonade will you need?” His answer was 18 pints. I said, “How much in each jug?” “Two-thirds of a pint.” I said, “Is that more or less that a pint?” “Less.” I said, “How many jugs are there?” “Six.” I said, “But that [the answer of 18 pints] doesn’t make any sense.” He shrugged his shoulders and said, “Well, that’s the way the system worked out.” Holt argues: “He has long since quit expecting school to make sense. They tell you these facts and rules, and your job is to put them down on paper the way they tell you. Never mind whether they mean anything or not.”1 A recent report of the National Research Council,2 Adding It Up, reviews a broad research base on the teaching and learning of elementary school mathematics. The report argues for an instructional goal of “mathematical proficiency,” a much broader outcome than mastery of procedures. The report argues that five intertwining strands constitute mathematical proficiency: Conceptual understanding—comprehension of mathematical concepts, operations, and relations Procedural fluency—skill in carrying out procedures flexibly, accurately, efficiently, and appropriately Strategic competence—ability to formulate, represent, and solve mathematical problems Adaptive reasoning—capacity for logical thought, reflection, explanation, and justification Productive disposition—habitual inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief in diligence and one’s own efficacy These strands map directly to the principles of How People Learn. Principle 2 argues for a foundation of factual knowledge (procedural fluency), tied to a conceptual framework (conceptual understanding), and organized in a way to facilitate retrieval and problem solving (strategic competence). Metacognition and adaptive reasoning both describe the phenomenon of ongoing sense making, reflection, and explanation to oneself and others. And, as we argue below, the preconceptions students bring to the study of mathematics affect more than their understanding and problem solving; those preconceptions also play a major role in whether students have a productive
How Students Learn: History, Mathematics, and Science in the Classroom disposition toward mathematics, as do, of course, their experiences in learning mathematics. The chapters that follow on whole number, rational number, and functions look at the principles of How People Learn as they apply to those specific domains. In this introduction, we explore how those principles apply to the subject of mathematics more generally. We draw on examples from the Children’s Math World project, a decade-long research project in urban and suburban English-speaking and Spanish-speaking classrooms.3 PRINCIPLE #1: TEACHERS MUST ENGAGE STUDENTS’ PRECONCEPTIONS At a very early age, children begin to demonstrate an awareness of number.4 As with language, that awareness appears to be universal in normally developing children, though the rate of development varies at least in part because of environmental influences.5 But it is not only the awareness of quantity that develops without formal training. Both children and adults engage in mathematical problem solving, developing untrained strategies to do so successfully when formal experiences are not provided. For example, it was found that Brazilian street children could perform mathematics when making sales in the street, but were unable to answer similar problems presented in a school context.6 Likewise, a study of housewives in California uncovered an ability to solve mathematical problems when comparison shopping, even though the women could not solve problems presented abstractly in a classroom that required the same mathematics.7 A similar result was found in a study of a group of Weight Watchers, who used strategies for solving mathematical measurement problems related to dieting that they could not solve when the problems were presented more abstractly.8 And men who successfully handicapped horse races could not apply the same skill to securities in the stock market.9 These examples suggest that people possess resources in the form of informal strategy development and mathematical reasoning that can serve as a foundation for learning more abstract mathematics. But they also suggest that the link is not automatic. If there is no bridge between informal and formal mathematics, the two often remain disconnected. The first principle of How People Learn emphasizes both the need to build on existing knowledge and the need to engage students’ preconceptions—particularly when they interfere with learning. In mathematics, certain preconceptions that are often fostered early on in school settings are in fact counterproductive. Students who believe them can easily conclude that the study of mathematics is “not for them” and should be avoided if at all possible. We discuss these preconceptions below.
How Students Learn: History, Mathematics, and Science in the Classroom Some Common Preconceptions About Mathematics Preconception #1: Mathematics is about learning to compute. Many of us who attended school in the United States had mathematics instruction that focused primarily on computation, with little attention to learning with understanding. To illustrate, try to answer the following question: What, approximately, is the sum of 8/9 plus 12/13? Many people immediately try to find the lowest common denominator for the two sets of fractions and then add them because that is the procedure they learned in school. Finding the lowest common denominator is not easy in this instance, and the problem seems difficult. A few people take a conceptual rather than a procedural (computational) approach and realize that 8/9 is almost 1, and so is 12/13, so the approximate answer is a little less than 2. The point of this example is not that computation should not be taught or is unimportant; indeed, it is very often critical to efficient problem solving. But if one believes that mathematics is about problem solving and that computation is a tool for use to that end when it is helpful, then the above problem is viewed not as a “request for a computation,” but as a problem to be solved that may or may not require computation—and in this case, it does not. If one needs to find the exact answer to the above problem, computation is the way to go. But even in this case, conceptual understanding of the nature of the problem remains central, providing a way to estimate the correctness of a computation. If an answer is computed that is more than 2 or less than 1, it is obvious that some aspect of problem solving has gone awry. If one believes that mathematics is about computation, however, then sense making may never take place. Preconception #2: Mathematics is about “following rules” to guarantee correct answers. Related to the conception of mathematics as computation is that of mathematics as a cut-and-dried discipline that specifies rules for finding the right answers. Rule following is more general than performing specific computations. When students learn procedures for keeping track of and canceling units, for example, or learn algebraic procedures for solving equations, many
How Students Learn: History, Mathematics, and Science in the Classroom view use of these procedures only as following the rules. But the “rules” should not be confused with the game itself. The authors of the chapters in this part of the book provide important suggestions about the much broader nature of mathematical proficiency and about ways to make the involving nature of mathematical inquiry visible to students. Groups such as the National Council of Teachers of Mathematics10 and the National Research Council11 have provided important guidelines for the kinds of mathematics instruction that accord with what is currently known about the principles of How People Learn. The authors of the following chapters have paid careful attention to this work and illustrate some of its important aspects. In reality, mathematics is a constantly evolving field that is far from cut and dried. It involves systematic pattern finding and continuing invention. As a simple example, consider the selection of units that are relevant to quantify an idea such as the fuel efficiency of a vehicle. If we choose miles per gallon, a two-seater sports car will be more efficient than a large bus. If we choose passenger miles per gallon, the bus will be more fuel efficient (assuming it carries large numbers of passengers). Many disciplines make progress by inventing new units and metrics that provide insights into previously invisible relationships. Attention to the history of mathematics illustrates that what is taught at one point in time as a set of procedures really was a set of clever inventions designed to solve pervasive problems of everyday life. In Europe in the Middle Ages, for example, people used calculating cloths marked with vertical columns and carried out procedures with counters to perform calculations. Other cultures fastened their counters on a rod to make an abacus. Both of these physical means were at least partially replaced by written methods of calculating with numerals and more recently by methods that involve pushing buttons on a calculator. If mathematics procedures are understood as inventions designed to make common problems more easily solvable, and to facilitate communications involving quantity, those procedures take on a new meaning. Different procedures can be compared for their advantages and disadvantages. Such discussions in the classroom can deepen students’ understanding and skill. Preconception #3: Some people have the ability to “do math” and some don’t. This is a serious preconception that is widespread in the United States, but not necessarily in other countries. It can easily become a self-fulfilling prophesy. In many countries, the ability to “do math” is assumed to be attributable to the amount of effort people put into learning it.12 Of course,
How Students Learn: History, Mathematics, and Science in the Classroom some people in these countries do progress further than others, and some appear to have an easier time learning mathematics than others. But effort is still considered to be the key variable in success. In contrast, in the United States we are more likely to assume that ability is much more important than effort, and it is socially acceptable, and often even desirable, not to put forth effort in learning mathematics. This difference is also related to cultural differences in the value attributed to struggle. Teachers in some countries believe it is desirable for students to struggle for a while with problems, whereas teachers in the United States simplify things so that students need not struggle at all.13 This preconception likely shares a common root with the others. If mathematics learning is not grounded in an understanding of the nature of the problem to be solved and does not build on a student’s own reasoning and strategy development, then solving problems successfully will depend on the ability to recall memorized rules. If a student has not reviewed those rules recently (as is the case when a summer has passed), they can easily be forgotten. Without a conceptual understanding of the nature of problems and strategies for solving them, failure to retrieve learned procedures can leave a student completely at a loss. Yet students can feel lost not only when they have forgotten, but also when they fail to “get it” from the start. Many of the conventions of mathematics have been adopted for the convenience of communicating efficiently in a shared language. If students learn to memorize procedures but do not understand that the procedures are full of such conventions adopted for efficiency, they can be baffled by things that are left unexplained. If students never understand that x and y have no intrinsic meaning, but are conventional notations for labeling unknowns, they will be baffled when a z appears. When an m precedes an x in the equation of a line, students may wonder, Why m? Why not s for slope? If there is no m, then is there no slope? To someone with a secure mathematics understanding, the missing m is simply an unstated m = 1. But to a student who does not understand that the point is to write the equation efficiently, the missing m can be baffling. Unlike language learning, in which new expressions can often be figured out because they are couched in meaningful contexts, there are few clues to help a student who is lost in mathematics. Providing a secure conceptual understanding of the mathematics enterprise that is linked to students’ sense-making capacities is critical so that students can puzzle productively over new material, identify the source of their confusion, and ask questions when they do not understand.
How Students Learn: History, Mathematics, and Science in the Classroom Engaging Students’ Preconceptions and Building on Existing Knowledge Engaging and building on student preconceptions, then, poses two instructional challenges. First, how can we teach mathematics so students come to appreciate that it is not about computation and following rules, but about solving important and relevant quantitative problems? This perspective includes an understanding that the rules for computation and solution are a set of clever human inventions that in many cases allow us to solve complex problems more easily, and to communicate about those problems with each other effectively and efficiently. Second, how can we link formal mathematics training with students’ informal knowledge and problem-solving capacities? Many recent research and curriculum development efforts, including those of the authors of the chapters that follow, have addressed these questions. While there is surely no single best instructional approach, it is possible to identify certain features of instruction that support the above goals: Allowing students to use their own informal problem-solving strategies, at least initially, and then guiding their mathematical thinking toward more effective strategies and advanced understandings. Encouraging math talk so that students can clarify their strategies to themselves and others, and compare the benefits and limitations of alternate approaches. Designing instructional activities that can effectively bridge commonly held conceptions and targeted mathematical understandings. Allowing Multiple Strategies To illustrate how instruction can be connected to students’ existing knowledge, consider three subtraction methods encountered frequently in urban second-grade classrooms involved in the Children’s Math Worlds Project (see Box 5-2). Maria, Peter, and Manuel’s teacher has invited them to share their methods for solving a problem, and each of them has displayed a different method. Two of the methods are correct, and one is mostly correct but has one error. What the teacher does depends on her conception of what mathematics is. One approach is to show the students the “right” way to subtract and have them and everyone else practice that procedure. A very different approach is to help students explore their methods and see what is easy and difficult about each. If students are taught that for each kind of math situation or problem, there is one correct method that needs to be taught and learned, the seeds of the disconnection between their reasoning and strategy development and “doing math” are sown. An answer is either wrong or
How Students Learn: History, Mathematics, and Science in the Classroom BOX 5-2 Three Subtraction Methods right, and one does not need to look at wrong answers more deeply—one needs to look at how to get the right answer. The problem is not that students will fail to solve the problem accurately with this instructional approach; indeed, they may solve it more accurately. But when the nature of the problem changes slightly, or students have not used the taught approach for a while, they may feel completely lost when confronting a novel problem because the approach of developing strategies to grapple with a problem situation has been short-circuited. If, on the other hand, students believe that for each kind of math situation or problem there can be several correct methods, their engagement in strategy development is kept alive. This does not mean that all strategies are equally good. But students can learn to evaluate different strategies for their advantages and disadvantages. What is more, a wrong answer is usually partially correct and reflects some understanding; finding the part that is wrong and understanding why it is wrong can be a powerful aid to understanding and promotes metacognitive competencies. A vignette of students engaged in the kind of mathematical reasoning that supports active strategy development and evaluation appears in Box 5-3. It can be initially unsettling for a teacher to open up the classroom to calculation methods that are new to the teacher. But a teacher does not have to understand a new method immediately or alone, as indicated in the description in the vignette of how the class together figured out over time how Maria’s method worked (this method is commonly taught in Latin America and Europe). Understanding a new method can be a worthwhile mathematical project for the class, and others can be involved in trying to figure out why a method works. This illustrates one way in which a classroom community can function. If one relates a calculation method to the quantities involved, one can usually puzzle out what the method is and why it works. This also demonstrates that not all mathematical issues are solved or understood immediately; sometimes sustained work is necessary.
How Students Learn: History, Mathematics, and Science in the Classroom BOX 5-3 Engaging Students’ Problem-Solving Strategies The following example of a classroom discussion shows how second-grade students can explain their methods rather than simply performing steps in a memorized procedure. It also shows how to make student thinking visible. After several months of teaching and learning, the students reached the point illustrated below. The students’ methods are shown in Box 5-2. Teacher Maria, can you please explain to your friends in the class how you solved the problem? Maria Six is bigger than 4, so I can’t subtract here [pointing] in the ones. So I have to get more ones. But I have to be fair when I get more ones, so I add ten to both my numbers. I add a ten here in the top of the ones place [pointing] to change the 4 to a 14, and I add a ten here in the bottom in the tens place, so I write another ten by my 5. So now I count up from 6 to 14, and I get 8 ones [demonstrating by counting “6, 7, 8, 9, 10, 11, 12, 13, 14” while raising a finger for each word from 7 to 14]. And I know my doubles, so 6 plus 6 is 12, so I have 6 tens left. [She thought, “1 + 5 = 6 tens and 6 + ? = 12 tens. Oh, I know 6 + 6 = 12, so my answer is 6 tens.”] Jorge I don’t see the other 6 in your tens. I only see one 6 in your answer. Maria The other 6 is from adding my 1 ten to the 5 tens to get 6 tens. I didn’t write it down. Andy But you’re changing the problem. How do you get the right answer? Maria If I make both numbers bigger by the same amount, the difference will stay the same. Remember we looked at that on drawings last week and on the meter stick. Michelle Why did you count up? Maria Counting down is too hard, and my mother taught me to count up to subtract in first grade.
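Maria's reasoning can even be written out as a tiny algorithm. The sketch below is not from the chapter; the function name and the specific numbers are inferred from the dialogue above, and it handles only the single regrouping that the example needs, so treat it as a minimal illustration of the "add ten to both numbers, then count up" idea.

```python
def equal_additions_subtract(top, bottom):
    """Subtract using the 'equal additions' idea Maria describes: if the ones
    digit on top is too small, add ten ones to the top number and one ten to
    the bottom number (adding the same amount to both leaves the difference
    unchanged), then take the difference in each column."""
    top_ones, top_tens = top % 10, top // 10
    bot_ones, bot_tens = bottom % 10, bottom // 10

    if top_ones < bot_ones:
        top_ones += 10    # the 4 becomes 14
        bot_tens += 1     # "write another ten by my 5": 5 tens become 6 tens

    ones = top_ones - bot_ones    # count up from 6 to 14 -> 8 ones
    tens = top_tens - bot_tens    # 12 tens take away 6 tens -> 6 tens
    return tens * 10 + ones

# The dialogue suggests a problem such as 124 - 56; the method gives 68.
print(equal_additions_subtract(124, 56))
```

The classroom point, of course, is not the code but the property it relies on: adding the same amount to both numbers preserves the difference, which is exactly what Jorge and Andy press Maria to justify.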
How Students Learn: History, Mathematics, and Science in the Classroom this work indicates that we have begun the crucial journey into mathematical proficiency for all and that the principles of How People Learn can guide us on this journey. NOTES 1. Holt, 1964, pp. 143-144. 2. National Research Council, 2001. 3. See Fuson, 1986a, 1986b, 1990; Fuson and Briars, 1990; Fuson and Burghardt, 1993, 1997; Fuson et al., 1994, 2000; Fuson and Smith, 1997; Fuson, Smith, and Lott, 1977; Fuson, Wearne et al., 1997; Fuson, Lo Cicero et al., 1997; Lo Cicero et al., 1999; Fuson et al., 2000; Ron, 1998. 4. Carey, 2001; Gelman, 1990; Starkey et al., 1990; Wynn, 1996; Canfield and Smith, 1996. 5. Case et al., 1999; Ginsburg, 1984; Saxe, 1982. 6. Carraher, 1986; Carraher et al., 1985. 7. Lave, 1988; Sternberg, 1999. 8. De la Rocha, 1986. 9. Ceci and Liker, 1986; Ceci, 1996. 10. National Council of Teachers of Mathematics, 2000. 11. National Research Council, 2001. 12. See, e.g., Hatano and Inagaki, 1996; Resnick, 1987; Stigler and Heibert, 1997. 13. Stigler and Heibert, 1999. 14. National Research Council, 2004. 15. See, e.g., Tobias, 1978. 16. Hufferd-Ackles et al., 2004. 17. Sherin, 2000a, 2002. 18. See, e.g., Bransford et al., 1989. 19. See, e.g., Schwartz and Moore, 1998. 20. Sherin, 2000b, 2001. 21. Lewis, 2002, p. 1. 22. Fernandez, 2002; Lewis, 2002; Stigler and Heibert, 1999. 23. Remillard, 1999, 2000. 24. Remillard and Geist, 2002. 25. Remillard, 2000. REFERENCES Anghileri, J. (1989). An investigation of young children’s understanding of multiplication. Educational Studies in Mathematics, 20, 367-385. Ashlock, R.B. (1998). Error patterns in computation. Upper Saddle River, NJ: Prentice-Hall. Baek, J.-M. (1998). Children’s invented algorithms for multidigit multiplication problems. In L.J. Morrow and M.J. Kenney (Eds.), The teaching and learning of algorithms in school mathematics. Reston, VA: National Council of Teachers of Mathematics.
How Students Learn: History, Mathematics, and Science in the Classroom Baroody, A.J., and Coslick, R.T. (1998). Fostering children’s mathematical power: An investigative approach to k-8 mathematics instruction. Mahwah, NJ: Lawrence Erlbaum Associates. Baroody, A.J., and Ginsburg, H.P. (1986). The relationship between initial meaningful and mechanical knowledge of arithmetic. In J. Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 75-112). Mahwah, NJ: Lawrence Erlbaum Associates. Beishuizen, M. (1993). Mental strategies and materials or models for addition and subtraction up to 100 in Dutch second grades. Journal for Research in Mathematics Education, 24, 294-323. Beishuizen, M., Gravemeijer, K.P.E., and van Lieshout, E.C.D.M. (Eds.). (1997). The role of contexts and models in the development of mathematical strategies and procedures. Utretch, The Netherlands: CD-B Press/The Freudenthal Institute. Bergeron, J.C., and Herscovics, N. (1990). Psychological aspects of learning early arithmetic. In P. Nesher and J. Kilpatrick (Eds.), Mathematics and cognition: A research synthesis by the International Group for the Psychology of Mathematics Education. Cambridge, England: Cambridge University Press. Bransford, J.D., Franks, J.J., Vye, N.J., and Sherwood, R.D. (1989). New approaches to instruction: Because wisdom can’t be told. In S. Vasniadou and A. Ortony (Eds.), Similarity and analogical reasoning (pp. 470-497). New York: Cambridge University Press. Brophy, J. (1997). Effective instruction. In H.J. Walberg and G.D. Haertel (Eds.), Psychology and educational practice (pp. 212-232). Berkeley, CA: McCutchan. Brownell, W.A. (1987). AT Classic: Meaning and skill—maintaining the balance. Arithmetic Teacher, 34(8), 18-25. Canfield, R.L., and Smith, E.G. (1996). Number-based expectations and sequential enumeration by 5-month-old infants. Developmental Psychology, 32, 269-279. Carey, S. (2001). Evolutionary and ontogenetic foundations of arithmetic. Mind and Language, 16(1), 37-55. Carpenter, T.P., and Moser, J.M. (1984). The acquisition of addition and subtraction concepts in grades one through three. Journal for Research in Mathematics Education, 15(3), 179-202. Carpenter, T.P., Fennema, E., Peterson, P.L., Chiang, C.P., and Loef, M. (1989). Using knowledge of children’s mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26(4), 499-531. Carpenter, T.P., Franke, M.L., Jacobs, V., and Fennema, E. (1998). A longitudinal study of invention and understanding in children’s multidigit addition and subtraction. Journal for Research in Mathematics Education, 29, 3-20. Carraher, T.N. (1986). From drawings to buildings: Mathematical scales at work. International Journal of Behavioural Development, 9, 527-544. Carraher, T.N., Carraher, D.W., and Schliemann, A.D. (1985). Mathematics in the streets and in schools. British Journal of Developmental Psychology, 3, 21-29. Carroll, W.M. (2001). A longitudinal study of children using the reform curriculum everyday mathematics. Available: http://everydaymath.uchicago.edu/educators/references.shtml [accessed September 2004].
How Students Learn: History, Mathematics, and Science in the Classroom Carroll, W.M., and Fuson, K.C. (1999). Achievement results for fourth graders using the standards-based curriculum everyday mathematics. Unpublished document, University of Chicago, Illinois. Carroll, W.M., and Porter, D. (1998). Alternative algorithms for whole-number operations. In L.J. Morrow and M.J. Kenney (Eds.), The teaching and learning of algorithms in school mathematics (pp. 106-114). Reston, VA: National Council of Teachers of Mathematics. Case, R. (1985). Intellectual development: Birth to adulthood. New York: Academic Press. Case, R. (1992). The mind’s staircase: Exploring the conceptual underpinnings of children’s thought and knowledge. Mahwah, NJ: Lawrence Erlbaum Associates. Case, R. (1998). A psychological model of number sense and its development. Paper presented at the annual meeting of the American Educational Research Association, April, San Diego, CA. Case, R., and Sandieson, R. (1988). A developmental approach to the identification and teaching of central conceptual structures in mathematics and science in the middle grades. In M. Behr and J. Hiebert (Eds.), Research agenda in mathematics education: Number concepts and in the middle grades (pp. 136-270). Mahwah, NJ: Lawrence Erlbaum Associates. Case, R., Griffin, S., and Kelly, W.M. (1999). Socioeconomic gradients in mathematical ability and their responsiveness to intervention during early childhood. In D.P. Keating and C. Hertzman (Eds.), Developmental health and the wealth of nations: Social, biological, and educational dynamics (pp. 125-149). New York: Guilford Press. Ceci, S.J. (1996). On intelligence: A bioecological treatise on intellectual development. Cambridge, MA: Harvard University Press. Ceci, S.J., and Liker, J.K. (1986). A day at the races: A study of IQ, expertise, and cognitive complexity. Journal of Experimental Psychology, 115(3), 255-266. Cotton, K. (1995). Effective schooling practices: A research synthesis. Portland, OR: Northwest Regional Lab. Davis, R.B. (1984). Learning mathematics: The cognitive science approach to mathematics education. Norwood, NJ: Ablex. De la Rocha, O.L. (1986). The reorganization of arithmetic practice in the kitchen. Anthropology and Education Quarterly, 16(3), 193-198. Dixon, R.C., Carnine, S.W., Kameenui, E.J., Simmons, D.C., Lee, D.S., Wallin, J., and Chard, D. (1998). Executive summary. Report to the California State Board of Education, review of high-quality experimental research. Eugene, OR: National Center to Improve the Tools of Educators. Dossey, J.A., Swafford, J.O., Parmantie, M., and Dossey, A.E. (Eds.). (2003). Multidigit addition and subtraction methods invented in small groups and teacher support of problem solving and reflection. In A. Baroody and A. Dowker (Eds.), The development of arithmetic concepts and skills: Constructing adaptive expertise. Mahwah, NJ: Lawrence Erlbaum Associates. Fernandez, C. (2002). Learning from Japanese approaches to professional development. The case of lesson study. Journal of Teacher Education, 53(5), 393-405.
How Students Learn: History, Mathematics, and Science in the Classroom Fraivillig, J.L., Murphy, L.A., and Fuson, K.C. (1999). Advancing children’s mathematical thinking in everyday mathematics reform classrooms. Journal for Research in Mathematics Education, 30, 148-170. Fuson, K.C. (1986a). Roles of representation and verbalization in the teaching of multidigit addition and subtraction. European Journal of Psychology of Education, 1, 35-56. Fuson, K.C. (1986b). Teaching children to subtract by counting up. Journal for Research in Mathematics Education, 17, 172-189. Fuson, K.C. (1990). Conceptual structures for multiunit numbers: Implications for learning and teaching multidigit addition, subtraction, and place value. Cognition and Instruction, 7, 343-403. Fuson, K.C. (1992a). Research on learning and teaching addition and subtraction of whole numbers. In G. Leinhardt, R.T. Putnam, and R.A. Hattrup (Eds.), The analysis of arithmetic for mathematics teaching (pp. 53-187). Mahwah, NJ: Lawrence Erlbaum Associates. Fuson, K.C. (1992b). Research on whole number addition and subtraction. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 243-275). New York: Macmillan. Fuson, K.C. (2003). Developing mathematical power in whole number operations. In J. Kilpatrick, W.G. Martin, and D. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 68-94). Reston, VA: National Council of Teachers of Mathematics. Fuson, K.C., and Briars, D.J. (1990). Base-ten blocks as a first- and second-grade learning/teaching approach for multidigit addition and subtraction and place-value concepts. Journal for Research in Mathematics Education, 21, 180-206. Fuson, K.C., and Burghardt, B.H. (1993). Group case studies of second graders inventing multidigit addition procedures for base-ten blocks and written marks. In J.R. Becker and B.J. Pence (Eds.), Proceedings of the fifteenth annual meeting of the North American chapter of the international group for the psychology of mathematics education (pp. 240-246). San Jose, CA: The Center for Mathematics and Computer Science Education, San Jose State University. Fuson, K.C., and Burghardt, B.H. (1997). Group case studies of second graders inventing multidigit subtraction methods. In Proceedings of the 19th annual meeting of the North American chapter of the international group for the psychology of mathematics education (pp. 291-298). San Jose, CA: The Center for Mathematics and Computer Science Education, San Jose State University. Fuson, K.C., and Fuson, A.M. (1992). Instruction to support children’s counting on for addition and counting up for subtraction. Journal for Research in Mathematics Education, 23, 72-78. Fuson, K.C., and Kwon, Y. (1992). Korean children’s understanding of multidigit addition and subtraction. Child Development, 63(2), 491-506. Fuson, K.C., and Secada, W.G. (1986). Teaching children to add by counting with finger patterns. Cognition and Instruction, 3, 229-260.
How Students Learn: History, Mathematics, and Science in the Classroom Fuson, K.C., and Smith, T. (1997). Supporting multiple 2-digit conceptual structures and calculation methods in the classroom: Issues of conceptual supports, instructional design, and language. In M. Beishuizen, K.P.E. Gravemeijer, and E.C.D.M. van Lieshout (Eds.), The role of contexts and models in the development of mathematical strategies and procedures (pp. 163-198). Utrecht, The Netherlands: CD-B Press/The Freudenthal Institute. Fuson, K.C., Stigler, J., and Bartsch, K. (1988). Grade placement of addition and subtraction topics in Japan, mainland China, the Soviet Union, Taiwan, and the United States. Journal for Research in Mathematics Education, 19(5), 449-456. Fuson, K.C., Perry, T., and Kwon, Y. (1994). Latino, Anglo, and Korean children’s finger addition methods. In J.E.H. van Luit (Ed.), Research on learning and instruction of mathematics in kindergarten and primary school, (pp. 220-228). Doetinchem/Rapallo, The Netherlands: Graviant. Fuson, K.C., Perry, T., and Ron, P. (1996). Developmental levels in culturally different finger methods: Anglo and Latino children’s finger methods of addition. In E. Jakubowski, D. Watkins, and H. Biske (Eds.), Proceedings of the 18th annual meeting of the North American chapter for the psychology of mathematics education (2nd edition, pp. 347-352). Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. Fuson, K.C., Lo Cicero, A., Hudson, K., and Smith, S.T. (1997). Snapshots across two years in the life of an urban Latino classroom. In J. Hiebert, T. Carpenter, E. Fennema, K.C. Fuson, D. Wearne, H. Murray, A. Olivier, and P. Human (Eds.), Making sense: Teaching and learning mathematics with understanding (pp. 129-159). Portsmouth, NH: Heinemann. Fuson, K.C., Smith, T., and Lo Cicero, A. (1997). Supporting Latino first graders’ ten-structured thinking in urban classrooms. Journal for Research in Mathematics Education, 28, 738-760. Fuson, K.C., Wearne, D., Hiebert, J., Murray, H., Human, P., Olivier, A., Carpenter, T., and Fennema, E. (1997). Children’s conceptual structures for multidigit numbers and methods of multidigit addition and subtraction. Journal for Research in Mathematics Education, 28, 130-162. Fuson, K.C., De La Cruz, Y., Smith, S., Lo Cicero, A., Hudson, K., Ron, P., and Steeby, R. (2000). Blending the best of the 20th century to achieve a mathematics equity pedagogy in the 21st century. In M.J. Burke and F.R. Curcio (Eds.), Learning mathematics for a new century (pp. 197-212). Reston, VA: National Council of Teachers of Mathematics. Geary, D.C. (1994). Children’s mathematical development: Research and practical applications. Washington, DC: American Psychological Association. Gelman, R. (1990). First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction as examples. Cognitive Science, 14, 79-106. Ginsburg, H.P. (1984). Children’s arithmetic: The learning process. New York: Van Nostrand. Ginsburg, H.P., and Allardice, B.S. (1984). Children’s difficulties with school mathematics. In B. Rogoff and J. Lave (Eds.), Everyday cognition: Its development in social contexts (pp. 194-219). Cambridge, MA: Harvard University Press.
How Students Learn: History, Mathematics, and Science in the Classroom Ginsburg, H.P., and Russell, R.L. (1981). Social class and racial influences on early mathematical thinking. Monographs of the Society for Research in Child Development 44(6, serial #193). Malden, MA: Blackwell. Goldman, S.R., Pellegrino, J.W., and Mertz, D.L. (1988). Extended practice of basic addition facts: Strategy changes in learning-disabled students. Cognition and Instruction, 5(3), 223-265. Goldman, S.R., Hasselbring, T.S., and the Cognition and Technology Group at Vanderbilt (1997). Achieving meaningful mathematics literacy for students with learning disabilities. Journal of Learning Disabilities, March 1(2), 198-208. Greer, B. (1992). Multiplication and division as models of situation. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 276-295). New York: Macmillan. Griffin, S., and Case, R. (1997). Re-thinking the primary school math curriculum: An approach based on cognitive science. Issues in Education, 3(1), 1-49. Griffin, S., Case, R., and Siegler, R.S. (1994 ). Rightstart: Providing the central conceptual structures for children at risk of school failure. In K. McGilly (Ed.), Classroom lessons: Integrating cognitive theory and classroom practice (pp. 13-48). Mahwah, NJ: Lawrence Erlbaum Associates. Grouws, D. (1992). Handbook of research on mathematics teaching and learning. New York: Teachers College Press. Hamann, M.S., and Ashcraft, M.H. (1986). Textbook presentations of the basic addition facts. Cognition and Instruction, 3, 173-192. Hart, K.M. (1987). Practical work and formalisation, too great a gap. In J.C. Bergeron, N. Hersovics, and C. Kieren (Eds.), Proceedings from the eleventh international conference for the psychology of mathematics education (vol. 2, pp. 408-415). Montreal, Canada: University of Montreal. Hatano, G., and Inagaki, K. (1996). Cultural contexts of schooling revisited. A review of the learning gap from a cultural psychology perspective. Paper presented at the Conference on Global Prospects for Education: Development, Culture, and Schooling, University of Michigan. Hiebert, J. (1986). Conceptual and procedural knowledge: The case of mathematics. Mahwah, NJ: Lawrence Erlbaum Associates. Hiebert, J. (1992). Mathematical, cognitive, and instructional analyses of decimal fractions. In G. Leinhardt, R. Putnam, and R.A. Hattrup (Eds.), The analysis of arithmetic for mathematics teaching (pp. 283-322). Mahwah, NJ: Lawrence Erlbaum Associates. Hiebert, J., and Carpenter, T.P. (1992). Learning and teaching with understanding. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 65-97). New York: Macmillan. Hiebert, J., and Wearne, D. (1986). Procedures over concepts: The acquisition of decimal number knowledge. In J. Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 199-223). Mahwah, NJ: Lawrence Erlbaum Associates. Hiebert, J., Carpenter, T., Fennema, E., Fuson, K.C., Murray, H., Olivier, A., Human, P., and Wearne, D. (1996). Problem solving as a basis for reform in curriculum and instruction: The case of mathematics. Educational Researcher, 25(4), 12-21.
How Students Learn: History, Mathematics, and Science in the Classroom Hiebert, J., Carpenter, T., Fennema, E., Fuson, K.C., Wearne, D., Murray, H., Olivier, A., and Human, P. (1997). Making sense: Teaching and learning mathematics with understanding. Portsmouth, NH: Heinemann. Holt, J. (1964). How children fail. New York: Dell. Hufferd-Ackles, K., Fuson, K., and Sherin, M.G. (2004). Describing levels and components of a math-talk community. Journal for Research in Mathematics Education, 35(2), 81-116. Isaacs, A.C., and Carroll, W.M. (1999). Strategies for basic-facts instruction. Teaching Children Mathematics, 5(9), 508-515. Kalchman, M., and Case, R. (1999). Diversifying the curriculum in a mathematics classroom streamed for high-ability learners: A necessity unassumed. School Science and Mathematics, 99(6), 320-329. Kameenui, E.J., and Carnine, D.W. (Eds.). (1998). Effective teaching strategies that accommodate diverse learners. Upper Saddle River, NJ: Prentice-Hall. Kerkman, D.D., and Siegler, R.S. (1993). Individual differences and adaptive flexibility in lower-income children’s strategy choices. Learning and Individual Differences, 5(2), 113-136. Kilpatrick, J., Martin, W.G., and Schifter, D. (Eds.). (2003). A research companion to principles and standards for school mathematics. Reston, VA: National Council of Teachers of Mathematics. Knapp, M.S. (1995). Teaching for meaning in high-poverty classrooms. New York: Teachers College Press. Lampert, M. (1986). Knowing, doing, and teaching multiplication. Cognition and Instruction, 3, 305-342. Lampert, M. (1992). Teaching and learning long division for understanding in school. In G. Leinhardt, R.T. Putnam, and R.A. Hattrup (Eds.), The analysis of arithmetic for mathematics teaching (pp. 221-282). Mahwah, NJ: Lawrence Erlbaum Associates. Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. London, England: Cambridge University Press. LeFevre, J., and Liu, J. (1997). The role of experience in numerical skill: Multiplication performance in adults from Canada and China. Mathematical Cognition, 3(1), 31-62. LeFevre, J., Kulak, A.G., and Bisantz, J. (1991). Individual differences and developmental change in the associative relations among numbers. Journal of Experimental Child Psychology, 52, 256-274. Leinhardt, G., Putnam, R.T., and Hattrup, R.A. (Eds.). (1992). The analysis of arithmetic for mathematics teaching. Mahwah, NJ: Lawrence Erlbaum Associates. Lemaire, P., and Siegler, R.S. (1995). Four aspects of strategic change: Contributions to children’s learning of multiplication. Journal of Experimental Psychology: General, 124(1), 83-97. Lemaire, P., Barrett, S.E., Fayol, M., and Abdi, H. (1994). Automatic activation of addition and multiplication facts in elementary school children. Journal of Experimental Child Psychology, 57, 224-258. Lewis, C. (2002). Lesson study: A handbook of teacher-led instructional change. Philadelphia, PA: Research for Better Schools.
How Students Learn: History, Mathematics, and Science in the Classroom Lo Cicero, A., Fuson, K.C., and Allexaht-Snider, M. (1999). Making a difference in Latino children’s math learning: Listening to children, mathematizing their stories, and supporting parents to help children. In L. Ortiz-Franco, N.G. Hernendez, and Y. De La Cruz (Eds.), Changing the faces of mathematics: Perspectives on Latinos (pp. 59-70). Reston, VA: National Council of Teachers of Mathematics. McClain, K., Cobb, P., and Bowers, J. (1998). A contextual investigation of three-digit addition and subtraction. In L. Morrow (Ed.), Teaching and learning of algorithms in school mathematics (pp. 141-150). Reston, VA: National Council of Teachers of Mathematics. McKnight, C.C., and Schmidt, W.H. (1998). Facing facts in U.S. science and mathematics education: Where we stand, where we want to go. Journal of Science Education and Technology, 7(1), 57-76. McKnight, C.C., Crosswhite, F.J., Dossey, J.A., Kifer, E., Swafford, J.O., Travers, K.T., and Cooney, T.J. (1989). The underachieving curriculum: Assessing U.S. school mathematics from an international perspective. Champaign, IL: Stipes. Miller, K.F., and Paredes, D.R. (1990). Starting to add worse: Effects of learning to multiply on children’s addition. Cognition, 37, 213-242. Moss, J., and Case, R. (1999). Developing children’s understanding of rational numbers: A new model and experimental curriculum. Journal for Research in Mathematics Education, 30(2), 122-147. Mulligan, J., and Mitchelmore, M. (1997). Young children’s intuitive models of multiplication and division. Journal for Research in Mathematics Education, 28(3), 309-330. National Council of Teachers of Mathematics. (1989). Curriculum and evaluation standards for school mathematics. Reston, VA: National Council of Teachers of Mathematics. National Council of Teachers of Mathematics. (1991). Professional standards for teaching mathematics. Reston, VA: National Council of Teachers of Mathematics. National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: National Council of Teachers of Mathematics. National Research Council. (2001). Adding it up: Helping children learn mathematics. Mathematics Learning Study Committee, J. Kilpatrick, J. Swafford, and B. Findell (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. National Research Council. (2002). Helping children learn mathematics. Mathematics Learning Study Committee, J. Kilpatrick, J. Swafford, and B. Findell (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. National Research Council. (2004). Learning and instruction: A SERP research agenda. Panel on Learning and Instruction. M.S. Donovan and J.W. Pellegrino (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Nesh, P., and Kilpatrick, J. (Eds.). (1990). Mathematics and cognition: A research synthesis by the International Group for the Psychology of Mathematics Education. Cambridge, MA: Cambridge University Press.
How Students Learn: History, Mathematics, and Science in the Classroom Nesher, P. (1992). Solving multiplication word problems. In G. Leinhardt, R.T. Putnam, and R.A. Hattrup (Eds.), The analysis of arithmetic for mathematics teaching (pp. 189-220). Mahwah, NJ: Lawrence Erlbaum Associates. Peak, L. (1996). Pursuing excellence: A study of the U.S. eighth-grade mathematics and science teaching, learning, curriculum, and achievement in an international context. Washington, DC: National Center for Education Statistics. Remillard, J.T. (1999). Curriculum materials in mathematics education reform: A framework for examining teachers’ curriculum development. Curriculum Inquiry, 29(3), 315-342. Remillard, J.T. (2000). Can curriculum materials support teachers’ learning? Elementary School Journal, 100(4), 331-350. Remillard, J.T., and Geist, P. (2002). Supporting teachers’ professional learning though navigating openings in the curriculum. Journal of Mathematics Teacher Education, 5(1), 7-34. Resnick, L.B. (1987). Education and learning to think. Committee on Mathematics, Science, and Technology Education, Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Resnick, L.B. (1992). From protoquantities to operators: Building mathematical competence on a foundation of everyday knowledge. In G. Leinhardt, R.T. Putnam, and R.A. Hattrup (Eds.), The analysis of arithmetic for mathematics teaching (pp. 373-429). Mahwah, NJ: Lawrence Erlbaum Associates. Resnick, L.B., and Omanson, S.F. (1987). Learning to understand arithmetic. In R. Glaser (Ed.), Advances in instructional psychology (vol. 3, pp. 41-95). Mahwah, NJ: Lawrence Erlbaum Associates. Resnick, L.B., Nesher, P., Leonard, F., Magone, M., Omanson, S., and Peled, I. (1989). Conceptual bases of arithmetic errors: The case of decimal fractions. Journal for Research in Mathematics Education, 20(1), 8-27. Ron, P. (1998). My family taught me this way. In L.J. Morrow and M.J. Kenney (Eds.), The teaching and learning of algorithms in school mathematics (pp. 115-119). Reston, VA: National Council of Teachers of Mathematics. Saxe, G.B. (1982). Culture and the development of numerical cognition: Studies among the Oksapmin of Papua New Guinea. In C.J. Brainerd (Ed.), Progress in cognitive development research: Children’s logical and mathematical cognition (vol. 1, pp. 157- 176). New York: Springer-Verlag. Schmidt, W., McKnight, C.C., and Raizen, S.A. (1997). A splintered vision: An investigation of U.S. science and mathematics education. Dordrecht, The Netherlands: Kluwer. Schwartz, D.L., and Moore, J.L. (1998). The role of mathematics in explaining the material world: Mental models for proportional reasoning. Cognitive Science, 22, 471-516. Secada, W.G. (1992). Race, ethnicity, social class, language, and achievement in mathematics. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 623-660). New York: Macmillan. Sherin, M.G. (2000a). Facilitating meaningful discussions about mathematics. Mathematics Teaching in the Middle School, 6(2), 186-190. Sherin, M.G. (2000b). Taking a fresh look at teaching through video clubs. Educational Leadership, 57(8), 36-38.
How Students Learn: History, Mathematics, and Science in the Classroom Sherin, M.G. (2001). Developing a professional vision of classroom events. In T. Wood, B.S. Nelson, and J. Warfield (Eds.), Beyond classical pedagogy: Teaching elementary school mathematics (pp. 75-93). Mahwah, NJ: Lawrence Erlbaum Associates. Sherin, M.G. (2002). A balancing act: Developing a discourse community in a mathematics classroom. Journal of Mathematics Teacher Education, 5, 205-233. Shuell, T.J. (2001). Teaching and learning in a classroom context. In N.J. Smelser and P.B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences (pp. 15468-15472). Amsterdam: Elsevier. Siegler, R.S. (1988). Individual differences in strategy choices: Good students, not-so-good students, and perfectionists. Child Development, 59(4), 833-851. Siegler, R.S. (2003). Implications of cognitive science research for mathematics education. In J. Kilpatrick, W.G. Martin, and D.E. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 1289-1303). Reston, VA: National Council of Teachers of Mathematics. Simon, M.A. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26, 114-145. Starkey, P., Spelke, E.S., and Gelman, R. (1990). Numerical abstraction by human infants. Cognition, 36, 97-127. Steffe, L.P. (1994). Children’s multiplying schemes. In G. Harel and J. Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 3-39). New York: State University of New York Press. Steffe, L.P., Cobb, P., and Von Glasersfeld, E. (1988). Construction of arithmetical meanings and strategies. New York: Springer-Verlag. Sternberg, R.J. (1999). The theory of successful intelligence. Review of General Psychology, 3(4), 292-316. Stigler, J.W., and Hiebert, J. (1999). Teaching gap. New York: Free Press. Stigler, J.W., Fuson, K.C., Ham, M., and Kim, M.S. (1986). An analysis of addition and subtraction word problems in American and Soviet elementary mathematics textbooks. Cognition and Instruction, 3(3), 153-171. Stipek, D., Salmon, J.M., Givvin, K.B., Kazemi, E., Saxe, G., and MacGyvers, V.L. (1998). The value (and convergence) of practices suggested by motivation research and promoted by mathematics education reformers. Journal for Research in Mathematics Education, 29, 465-488. Thornton, C.A. (1978). Emphasizing thinking in basic fact instruction. Journal for Research in Mathematics Education, 9, 214-227. Thornton, C.A., Jones, G.A., and Toohey, M.A. (1983). A multisensory approach to thinking strategies for remedial instruction in basic addition facts. Journal for Research in Mathematics Education, 14(3), 198-203. Tobias, S. (1978). Overcoming math anxiety. New York: W.W. Norton. Van de Walle, J.A. (1998). Elementary and middle school mathematics: Teaching developmentally, third edition. New York: Longman. Van de Walle, J.A. (2000). Elementary school mathematics: Teaching developmentally, fourth edition. New York: Longman. Wynn, K. (1996). Infants’ individuation and enumeration of actions. Psychological Science, 7, 164-169.
OCR for page 256
How Students Learn: History, Mathematics, and Science in the Classroom Zucker, A.A. (1995). Emphasizing conceptual understanding and breadth of study in mathematics instruction. In M.S. Knapp (Ed.), Teaching for meaning in high-poverty classrooms. New York: Teachers College Press. SUGGESTED READING LIST FOR TEACHERS Carpenter, T.P. Fennema, E., Franke, M.L., Empson, S.B., and Levi, L.W. (1999). Children’s mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann. Fuson, K.C. (1988). Subtracting by counting up with finger patterns. (Invited paper for the Research into Practice Series.) Arithmetic Teacher, 35(5), 29-31. Hiebert, J., Carpenter, T., Fennema, E., Fuson, K.C., Wearne, D., Murray, H., Olivier, A., and Human, P. (1997). Making sense: Teaching and learning mathematics with understanding. Portsmouth, NH: Heinemann. Jensen, R.J. (Ed.). (1993). Research ideas for the classroom: Early childhood mathematics. New York: Macmillan. Knapp, M.S. (1995). Teaching for meaning in high-poverty classrooms. New York: Teachers College Press. Leinhardt, G., Putnam, R.T., and Hattrup, R.A. (Eds.). (1992). The analysis of arithmetic for mathematics teaching. Mahwah, NJ: Lawrence Erlbaum Associates. Lo Cicero, A., De La Cruz, Y., and Fuson, K.C. (1999). Teaching and learning creatively with the Children’s Math Worlds Curriculum: Using children’s narratives and explanations to co-create understandings. Teaching Children Mathematics, 5(9), 544-547. Owens, D.T. (Ed.). (1993). Research ideas for the classroom: Middle grades mathematics. New York: Macmillan. Schifter, D. (Ed.). (1996). What’s happening in math class? Envisioning new practices through teacher narratives. New York: Teachers College Press. Wagner, S. (Ed.). (1993). Research ideas for the classroom: High school mathematics. New York: Macmillan.
Representative terms from entire chapter: | http://www.nap.edu/openbook.php?record_id=10126&page=215 | 13 |
64 | Particle Size and Settling Rate
Country: United States
Date: March 2008
I have learned in Earth Science that larger, more dense, spherical
particles settle first in still water. However, I was wondering why this
happens. How do the density, size, and shape of an object affect its settling rate?
It is a matter of competing forces. The force pulling the particle
down is gravity, F = m*a. As the particle gets larger and denser, m (mass)
increases. The opposing force is the friction (drag) of the water, which also increases
with the size of the particle and its shape (with more friction as its surface
area increases).
So, for example, of two particles of the same mass and density, the one with
the larger surface area (thus more friction) will settle more slowly.
Of two particles of the same size and shape but different density, the one with
the higher density (more mass) will settle faster.
There are various other permutations, although it is harder to know without
calculations or experimentation which will settle faster if you vary more than
one characteristic at a time.
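As a concrete, if idealized, illustration of these competing forces, here is a minimal Python sketch using Stokes' law for small spheres settling slowly in still water. The particle radii and densities are illustrative assumptions, not values from the question, and Stokes' law itself only applies to small, slowly settling grains.

```python
# Terminal settling velocity of a small sphere in still water (Stokes' law):
# drag balances the net downward force (gravity minus buoyancy), giving
# v = 2 * (rho_particle - rho_water) * g * r**2 / (9 * mu)

G = 9.81             # gravitational acceleration, m/s^2
RHO_WATER = 1000.0   # density of water, kg/m^3
MU_WATER = 1.0e-3    # dynamic viscosity of water, Pa*s

def settling_velocity(radius_m, rho_particle):
    """Terminal velocity in m/s; positive means the particle sinks."""
    return 2.0 * (rho_particle - RHO_WATER) * G * radius_m ** 2 / (9.0 * MU_WATER)

# Same size, different density: the denser grain settles faster.
print(settling_velocity(1e-4, 2650.0))   # quartz-like grain, ~0.1 mm radius
print(settling_velocity(1e-4, 1200.0))   # lighter, organic-like grain

# Same density, different size: the larger grain settles faster.
print(settling_velocity(2e-4, 2650.0))
```

For larger or faster particles the drag law changes, which is why calculations or experiments are needed once more than one property is varied at a time.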
You may have heard that two objects fall at the same rate, that the speed
at which they fall depends only on gravity (which is the same if they
are at the same distance from the center of the Earth), and this may have led you
to think that the same will be true for objects settling in fluids. It is not,
because a fluid like water affects falling (or settling) rates far more than a
fluid like air does. Water, being more dense than air, can have a strong buoyant
effect on objects. For example, a piece of wood may float in water, but a similarly
shaped piece of metal will sink. On the other hand, a similar mass of metal that
is shaped like a hollow sphere may float on water, because the displaced
water can have a mass greater than that of the metal object. So, unlike air, which
has a small buoyant effect (the mass of air displaced by an object is small
relative to the mass of the object), water, being denser than air, displaces a
much bigger mass and has a stronger buoyant effect.
Thus, objects will fall at different rates in a denser fluid like water because
the buoyant effect on the objects will be different.
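The float-or-sink comparison described above can be checked with a short sketch; the material densities and volumes used here are rough illustrative assumptions, not values from the answer.

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def net_downward_force(mass_kg, displaced_volume_m3):
    """Weight minus buoyant force (weight of displaced water), in newtons.
    A negative result means the object tends to rise, i.e. it floats."""
    return mass_kg * G - RHO_WATER * displaced_volume_m3 * G

volume = 0.001  # 1 liter of solid material

# Solid wood (~600 kg/m^3) vs. solid steel (~7800 kg/m^3) of the same volume:
print(net_downward_force(600.0 * volume, volume))    # negative: wood floats
print(net_downward_force(7800.0 * volume, volume))   # positive: steel sinks

# The same mass of steel formed into a hollow sphere encloses far more volume,
# displaces far more water, and therefore can float.
steel_mass = 7800.0 * volume
hollow_volume = 0.010  # m^3 enclosed by the hollow shape (assumed)
print(net_downward_force(steel_mass, hollow_volume))  # negative: hollow shape floats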
Greg (Roberto Gregorius)
Spherical particles may settle more rapidly because their
smaller surface area (than, say, an irregularly shaped particle)
causes less frictional drag force. Thus they can settle fastest,
assuming that all other factors (size and mass) apart from shape
and irregularity are held the same.
More dense (larger mass per unit volume) particles are heavier, and thus
for two particles of the same size (one heavy, one light) gravity will
act more on the denser particle, making it settle faster.
Larger particles, having more mass than smaller particles (assuming
that their density is the same), will settle faster because gravity
has a greater effect on them.
Of course, we are talking about "larger" particles here with greater
density than water, as very small particles (especially if they have
the same density as water) can become suspended in a fluid. Particles
with less density than the fluid should float.
Kim's question could make it a bit confusing, as she said
"larger, denser, spherical".
The answer could be just "yes", but I wanted to explain further.
Frictional forces exerted by the medium through which the
particle is falling cause the fall rate to slow and eventually
reach the terminal velocity. The fall rate therefore depends
on mass, size, and shape of the object, as well as the density
of the medium. I did not want to confuse Kim with terminal velocity.
If two particles have the same diameter, the one with greater density
(defined as mass per unit volume), and thus greater mass, will fall
faster because gravity will act more upon it.
If two particles have the same density but different
diameters (and thus different mass), it may seem as though
the one with more mass should fall faster, but that one also
experiences greater frictional forces imposed on it by
the medium. Equations would have to be used to determine
which would fall faster.
If two particles had the same mass and density, the one that is
more streamlined (spherical) would fall faster than one that is
irregularly shaped, because frictional forces in the medium
would act more upon the irregularly shaped particle, thereby
slowing its fall more.
In a way, Galileo's experiment was flawed. There was not sufficient
distance during the fall for weak air frictional forces to have much
of an effect on the two falling objects. If they had been differently
shaped, like a feather and something round of the same mass as the
feather, or if the experiment had been performed from the top of the
Empire State Building or Sears Tower, the results would have been
different. The fall rate for two objects of the same mass but
different shapes is only the same in a vacuum, where frictional
forces cannot act (which is not a real-life situation).
David R. Cook
Climate Research Section
Environmental Science Division
Argonne National Laboratory
Oops, I somehow sent it before I finished. onward...
Again, you are right that the buoyancy force is equal in magnitude to the weight
of fluid displaced. I have a hard time getting that across to my students in one
mouthful, because in that tiny little phrase is contained the concepts of gravity,
density, and displacement. So I just stated that buoyancy is proportional to
volume, which is true as long as the object is completely submerged.
So I propose this as another iteration of the paragraph with your two comments:
"The settling rate is the speed at which the viscous drag acting on the settling
particle exactly opposes the downward force on the particle. That downward force
is the interaction between the downward force of gravity on the particle and the
upward buoyancy force. (I have grouped the forces in this way because the force
of gravity and the force of buoyancy are the same at any speed the particle might
move. The force of viscous drag, however, changes with speed.)
The larger the object is, and the denser it is, the greater the gravitational force
acting downward on it. The force of buoyancy on the object is how much the water
pushes it upward. The strength of this buoyancy force is proportional to the
volume of the object. (It is actually equal to the weight of the water displaced
by the particle, which is equal to the weight of a volume of water equal to the
volume of the particle.) If the particle is more dense than water, its
gravitational force exceeds its buoyancy force and the net force on the particle
is downward. If the particle is less dense than water, its buoyancy force is
stronger than the gravitational force and the net force on the particle is upward."
Richard Barrans, Ph.D., M.Ed.
Department of Physics and Astronomy
University of Wyoming
| http://www.newton.dep.anl.gov/askasci/env99/env99374.htm | 13
87 | The Flume Study
Too often our floodways are cleared of native vegetation for fear that trees and shrubs will block or redirect flood flows that may damage property. However, native vegetation can protect against soil erosion and can be designed to have zero impact on the movement of flood waters. Engineers need quantitative data about the behavior of different kinds of riparian plants in order to incorporate their use in floodways. In a recent study in the large flume at the J. Amorocho Hydraulics Laboratory at UC Davis (Kavvas and others 2009), multiple depths and velocities of flow were tested on four species of flexible-stemmed riparian plants and on bare soil. The results indicate that riparian vegetation can be beneficial to floodway designs.
Background of Flow Properties
The velocity of the flow in any stream or river is primarily determined by gravity operating through the slope of the channel – water runs faster down a steep slope than a gentle slope. Resistance to flow, or hydraulic roughness of the channel (the texture of the surface of the channel, banks, and floodplain due to type of vegetation structure, geomorphology and topography, and size of sediment) modifies the effect of gravity and slows the flow, resulting in the velocity that we see. Because water molecules are in a fluid state, literally anything in the channel can locally modify their velocity and redirect their flow for short distances. We call this hydraulic turbulence; it is seen as waves, and as boils on smooth river water produced by upward-oriented turbulence. Hydraulic turbulence is the mechanism of resistance to flow, especially at higher velocities.
Flows in open river channels are turbulent, with vectors moving in all directions within the general slope induced flow. At any point in the channel or on the bank, the velocity is variable over time spans of seconds and minutes. This is caused by turbulence in the flow. As velocity increases so does the turbulence. On the bed of the channel, or on the surface of the floodplain, the turbulence dislodges sediment and entrains it into the flow. As velocity increases, the force of the turbulence increases as well, such that larger diameter sediment is kicked up and entrained. The turbulence in such a flow will keep particles as large as cobbles entrained for great distances, resulting in the placement of cobbles onto the floodplain that we see after a large flood.
Flume tests showed that at the higher velocities (4-6 feet per second, fps) the plants bent over with the flow, becoming more streamlined as the flow velocity beneath the plant canopies dropped, while the flow velocity increased over the top of the canopies. Surface erosion of the soil was minimal under the plant canopies. However, the flume tests on bare soil, with no plant canopies to protect the surface, resulted in dramatic increases in soil erosion off the surface at about 4 fps. As velocity increases over the bare soil, lift forces cause soil particles to rise into the water column while hydraulic turbulence also increases in intensity, thereby kicking up soil particles and entraining them in the flow. Therefore, both increases in velocity and increases in hydraulic turbulence cause greater erosion on bare soil.
Comparison of soil surface erosion depths under different flood flow velocities for four California native plant canopies and a bare soil bed. Used with permission from Kavvas and others 2009.
Distribution of velocity differences across channel-floodplain (x-section)
During a flood the velocity in the floodway is not uniform. The flow is fastest and deepest in the main channel. On the adjacent floodplain velocity is slower and variable as one proceeds along a transect. In fact, where eddies occur, the flow may be directed upstream for short distances. Eddies and low-flow backwaters have relatively slow velocities compared to other areas in the floodway. The slow- and fast-velocity areas are determined by the geomorphology of the floodway. Thus trees and brush growing in these low-velocity areas have little or no impact on the hydraulic roughness of the floodway. Hydraulic models of the flows within the floodway can tell us how fast the flow is and where the velocity is distributed.
If the velocity of flow changes at any point along a stream, effects are felt both upstream and down. Slowing the velocity will cause the flow upstream to “stack-up” on top of the slower water. This raises the elevation of the stream and could force water over the top of a levee. On the other hand, increasing the velocity at a point along a stream can lead to water stacking up downstream. From Knighton p 99: “At a given point on the bank, over time (seconds and minutes), velocity will naturally fluctuate, causing the elevation of the flow to fluctuate also.” The hydraulic model for the Feather River shows plus or minus 0.1 ft. accuracy; this is roughly the same variability that we observe while standing on the bank!
Flow depth and roughness
Flow depth greatly influences the effects of channel, floodplain, and vegetation roughness. Central Valley rivers function as floodways with flows confined between levees, which results in flow depths over the floodplain that are unnaturally deep – 15 to 20 feet, or more, on the lower Feather River. Two phenomena occur at these great depths: 1. Under deep flows, proportionately less of the flow is in contact with the channel and floodplain, compared to shallow flow. 2. The weight of the water column under deep flows exerts a tremendous force upon the channel, floodplain, and vegetation, causing sediment erosion and pressing flexible-stemmed plants to the bottom. For example, one early sign of an unsafe levee is erosion taking place at its base, where the hydraulic forces are greatest.
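To get a feel for how depth changes the force on the bed, here is a minimal sketch of the standard depth-slope product for bed shear stress in uniform open-channel flow. The water-surface slope used here is an assumed illustrative value, not one from the Feather River studies.

```python
GAMMA_WATER = 62.4   # unit weight of water, lb/ft^3

def bed_shear_stress(depth_ft, slope):
    """Approximate bed shear stress (lb/ft^2) for wide, uniform flow:
    tau = gamma * depth * slope."""
    return GAMMA_WATER * depth_ft * slope

slope = 0.0002  # assumed water-surface slope

print(bed_shear_stress(5.0, slope))    # shallower, unconfined-style flow
print(bed_shear_stress(15.0, slope))   # deep flow confined between levees
print(bed_shear_stress(20.0, slope))
```

Tripling or quadrupling the depth raises the shear on the bed (and on levee toes) by roughly the same factor, consistent with erosion problems appearing first at the base of levees.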
Plant stems and roughness and velocity - Flexible stems vs. rigid stems
Vegetation is composed of many different species of plants, each with its characteristic growth form and range of stem diameters. Thus, trees have a main trunk that is many inches in diameter, shrubs have many stems of smaller diameter, and herbaceous plants have stems less than an inch in diameter that wither and die at the end of each growing season. Each growth form is composed of stems of a specific range of diameters. Clearly, a single tree trunk that is many inches in diameter is not at all flexible. A tree trunk will resist flows, causing hydraulic roughness to remain the same or to increase as flows become faster and deeper. On the other hand, shrubs are composed of many stems that are of smaller diameter than a tree trunk. These shrub stems are more flexible and can bend as depth and velocity of flow increase. Vines have stems typically less than one inch in diameter and will bend even under low flow velocities.
As stream flows increase in depth with larger volumes of water compressed between two levees, the weight of the deeper water impacts the channel, the floodplain sediments, and the vegetation growing on them. Flood flows can be 15 to 20 feet deep at maximum design discharge in the Feather River floodway (historically, flows were much shallower when the river could spread across the natural floodplain during a flood). The weight of the deep water is more effective than shallow water at mobilizing sediments, and it will press the flexible stems of vegetation down to the bed. Tall, non-flexible trees and shrubs may stand upright during a deep flood. However, flexible shrubs (rose, blackberry, sandbar willow, etc.) will be pressed to the bottom. This phenomenon has recently been quantified by the flume study at UC Davis.
The Flume Study quantifies the impacts of flexible-stemmed plants upon hydraulic roughness and bed erosion. Plant growth forms composed of small-diameter stems (less than one inch in diameter: rose, blackberry, sandbar willow, mulefat) will bend under the force of moving water at relatively slow velocities (2-5 fps). Direct measurements of velocity above, below, and within the plant canopies in the flume reveal important characteristics of the bendability of stems and their impacts upon hydraulic roughness and bed erosion.
Kavvas and others (2009) quantified the velocities at which four species of riparian plants bend over in response to flow. They also showed that water velocity slightly increases above the plants and decreases underneath them. The slower velocity of water beneath the plants decreases soil erosion.
As depth and velocity increase, flexible-stemmed plants bend with the flow and hydraulic roughness (Manning's n) decreases as the flow passes over them. Measurements of velocity in the flume show that velocity increases over the prostrate plant stems. Under the plant stems, velocity decreases by over half, thereby protecting the bed from sediment erosion.
Vertical distributions of flow velocity averaged over the three replicate wild rose canopies. The blue diamonds reflect the velocity profile as the flow entered the wild rose canopy. The red squares represent the velocity profile 18 feet into the canopy, and the yellow triangles represent the velocity profile at the downstream end of the wild rose canopy. Used with permission from Kavvas and others 2009.
Velocity Profiles for Sandbar willow. The blue diamonds represent the velocity profile as the flow first enters the canopy of the Sandbar willow; the red squares represent the velocity profile after the flow has traveled 18 feet into the canopy; and the yellow triangles represent the velocity profile as the flow leaves the Sandbar willow canopy. Used with permission from Kavvas and others 2009.
Vegetation can direct flows
When a dense stand of non-flexible trees or shrubs grow adjacent to an open area composed of only herbaceous plants, flows will deflect off the dense stand and into the open area of low hydraulic roughness. This phenomenon can be used to protect structures and focus sediment transport, as at O’Connor Lakes (see discussion below).
Engineers have several mathematical formulae to describe flow velocity. Resistance to flow by the channel boundary (bed, banks, and vegetation) is usually described by a roughness coefficient in the formula. Manning's equation is the one used by flood control engineers in California:
v = (1.49 / n) * R^(2/3) * s^(1/2)
where: v = velocity; R = hydraulic radius (R = A/p, where A = channel cross-section area and p = wetted perimeter of the channel); s = slope of the energy gradient; n = resistance (or roughness) coefficient.
(These parameters are all typically averaged across a stream cross-section. Recently, two-dimensional hydraulic models have been developed that sub-divide the channel cross-section into smaller, more realistic units. See Hydraulic Modeling).
In the formula, n is a coefficient that describes the resistance to flow as a function of flow velocity (v) and depth (R) of flow. The Manning’s equation describes the interactions of river flow velocity, flow depth, and channel slope and roughness. Thus, as depth increases, roughness (n) will decrease and velocity will increase because less of the flow is in contact with the perimeter (channel and floodplain).
The absolute value of n can be used to describe the flow resistance caused by different vegetation structures. For example, a grove of dense trees with relatively large diameter, non-flexible stems and trunks will resist flow to a much greater magnitude than a stand of flexible stemmed sandbar willows covering the same area. In this example the grove of trees might have a roughness coefficient of n=.07, while the more flexible sandbar willows might have a roughness coefficient of n=.05. The O’Connor Lakes hydraulic modeling exercise that resulted in the final restoration planting design is a good case study that demonstrates how Manning’s n was used to create the restoration planting design.
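As a rough numerical sketch of how the roughness coefficient feeds into Manning's equation, the following uses the n values quoted above for a dense grove of trees (0.07) and flexible sandbar willows (0.05). The hydraulic radius and energy slope are assumed illustrative values, not modeled Feather River numbers.

```python
def manning_velocity(n, hydraulic_radius_ft, slope):
    """Average velocity (ft/s) from Manning's equation in US customary units."""
    return (1.49 / n) * hydraulic_radius_ft ** (2.0 / 3.0) * slope ** 0.5

R = 14.0    # assumed hydraulic radius, ft (deep flow over a wide floodplain)
s = 0.0002  # assumed energy-gradient slope

print(manning_velocity(0.07, R, s))  # dense grove of trees, roughly 1.7 ft/s
print(manning_velocity(0.05, R, s))  # flexible sandbar willows, roughly 2.5 ft/s
```

With the same depth and slope, the lower-roughness willow stand passes water roughly 40 percent faster, which is the kind of difference a planting-design modeling exercise like the one at O'Connor Lakes trades on.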
How the flume is not a river
Flume results are presented with Manning's n as a function of Reynolds number (Reynolds number = velocity x depth / kinematic viscosity of water). The flume is much shallower than a river, hence the Reynolds number will be even larger in the real world. In the real world, flood flows are much deeper (14 feet in the Feather River) than in the flume (5 feet). Referring to the figure below, we see that the canopy Manning's n for all species is decreasing and about to intersect with the rising Manning's n for bare soil. Extending the axes of this figure out to the Reynolds number that would occur at a depth of 14 feet, we see that the vegetation canopies would have a minimal impact on hydraulic roughness because they would likely be pressed to the bed.
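A quick sketch of the Reynolds-number comparison between the flume and a deep floodway; the velocity and the kinematic viscosity of water are assumed typical values, not measured ones.

```python
NU_WATER = 1.1e-5   # kinematic viscosity of water, ft^2/s (roughly, near 20 C)

def reynolds_number(velocity_fps, depth_ft):
    """Re = velocity * depth / kinematic viscosity."""
    return velocity_fps * depth_ft / NU_WATER

velocity = 4.0  # ft/s, within the range tested in the flume

print(reynolds_number(velocity, 5.0))    # flume depth: about 1.8 million
print(reynolds_number(velocity, 14.0))   # Feather River flood depth: about 5 million
```

At the same velocity, the deeper flow sits nearly three times farther out along the Reynolds-number axis of the figure, which is why the canopy curves are expected to fall to, or below, the bare-soil curve at flood depth.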
Manning’s roughness coefficients as a function of Reynolds number under various California native riparian vegetation canopy conditions. Used with permission from Kavvas and others 2009. | http://riverpartners.org/resources/riparian-ecology/veg-floodway/the-flume.html | 13
56 | An open cluster is a group of up to a few thousand stars that were formed from the same giant molecular cloud and have roughly the same age. More than 1,100 open clusters have been discovered within the Milky Way Galaxy, and many more are thought to exist. They are loosely bound to each other by mutual gravitational attraction and become disrupted by close encounters with other clusters and clouds of gas as they orbit the galactic center, resulting in a migration to the main body of the galaxy as well as a loss of cluster members through internal close encounters. Open clusters generally survive for a few hundred million years. In contrast, the more massive globular clusters of stars exert a stronger gravitational attraction on their members, and can survive for many billions of years. Open clusters have been found only in spiral and irregular galaxies, in which active star formation is occurring.
Young open clusters may still be contained within the molecular cloud from which they formed, illuminating it to create an H II region. Over time, radiation pressure from the cluster will disperse the molecular cloud. Typically, about 10% of the mass of a gas cloud will coalesce into stars before radiation pressure drives the rest of the gas away.
Open clusters are key objects in the study of stellar evolution. Because the cluster members are of similar age and chemical composition, the effects of other stellar properties are more easily determined than they are for isolated stars. A number of open clusters, such as the Pleiades, Hyades or the Alpha Persei Cluster are visible with the naked eye. Some others, such as the Double Cluster, are barely perceptible without instruments, while many more can be seen using binoculars or telescopes. The Wild Duck Cluster, M11, is an example.
Historical observations
The prominent open cluster Pleiades has been recognized as a group of stars since antiquity, while the Hyades forms part of Taurus, one of the oldest constellations. Other open clusters were noted by early astronomers as unresolved fuzzy patches of light. The Roman astronomer Ptolemy mentions the Praesepe, the Double Cluster in Perseus, and the Ptolemy Cluster, while the Persian astronomer Al-Sufi wrote of the Omicron Velorum cluster. However, it would require the invention of the telescope to resolve these nebulae into their constituent stars. Indeed, in 1603 Johann Bayer gave three of these clusters designations as if they were single stars.
The first person to use a telescope to observe the night sky and record his observations was the Italian scientist Galileo Galilei in 1609. When he turned the telescope toward some of the nebulous patches recorded by Ptolemy, he found they were not a single star, but groupings of many stars. For the Praesepe, he found more than 40 stars. Where previously observers had noted only 6-7 stars in the Pleiades, he found almost 50. In his 1610 treatise Sidereus Nuncius, Galileo Galilei wrote, "the galaxy is nothing else but a mass of innumerable stars planted together in clusters." Influenced by Galileo's work, the Sicilian astronomer Giovanni Hodierna became possibly the first astronomer to use a telescope to find previously undiscovered open clusters. In 1654, he identified the objects now designated Messier 41, Messier 47, NGC 2362 and NGC 2451.
It was realised as early as 1767 that the stars in a cluster were physically related, when the English naturalist Reverend John Michell calculated that the probability of even just one group of stars like the Pleiades being the result of a chance alignment as seen from Earth was just 1 in 496,000. Between 1774 and 1781, French astronomer Charles Messier published a catalogue of celestial objects that had a nebulous appearance similar to comets. This catalogue included 26 open clusters. In the 1790s, English astronomer William Herschel began an extensive study of nebulous celestial objects. He discovered that many of these features could be resolved into groupings of individual stars. Herschel conceived the idea that stars were initially scattered across space, but later became clustered together as star systems because of gravitational attraction. He divided the nebulae into eight classes, with classes VI through VIII being used to classify clusters of stars.
The number of clusters known continued to increase under the efforts of astronomers. Hundreds of open clusters were listed in the New General Catalogue, first published in 1888 by the Danish-Irish astronomer J. L. E. Dreyer, and the two supplemental Index Catalogues, published in 1896 and 1905. Telescopic observations revealed two distinct types of clusters, one of which contained thousands of stars in a regular spherical distribution and was found all across the sky but preferentially towards the centre of the Milky Way. The other type consisted of a generally sparser population of stars in a more irregular shape. These were generally found in or near the galactic plane of the Milky Way. Astronomers dubbed the former globular clusters, and the latter open clusters. Because of their location, open clusters are occasionally referred to as galactic clusters, a term that was introduced in 1925 by the Swiss-American astronomer Robert Julius Trumpler.
Micrometer measurements of the positions of stars in clusters were made as early as 1877 by the German astronomer E. Schönfeld and further pursued by the American astronomer E. E. Barnard prior to his death in 1923. No indication of stellar motion was detected by these efforts. However, in 1918 the Dutch-American astronomer Adriaan van Maanen was able to measure the proper motion of stars in part of the Pleiades cluster by comparing photographic plates taken at different times. As astrometry became more accurate, cluster stars were found to share a common proper motion through space. By comparing the photographic plates of the Pleiades cluster taken in 1918 with images taken in 1943, van Maanen was able to identify those stars that had a proper motion similar to the mean motion of the cluster, and were therefore more likely to be members. Spectroscopic measurements revealed common radial velocities, thus showing that the clusters consist of stars bound together as a group.
The first color-magnitude diagrams of open clusters were published by Ejnar Hertzsprung in 1911, giving the plot for the Pleiades and Hyades star clusters. He continued this work on open clusters for the next twenty years. From spectroscopic data, he was able to determine the upper limit of internal motions for open clusters, and could estimate that the total mass of these objects did not exceed several hundred times the mass of the Sun. He demonstrated a relationship between the star colors and their magnitudes, and in 1929 noticed that the Hyades and Praesepe clusters had different stellar populations than the Pleiades. This would subsequently be interpreted as a difference in ages of the three clusters.
The formation of an open cluster begins with the collapse of part of a giant molecular cloud, a cold dense cloud of gas and dust containing up to many thousands of times the mass of the Sun. These clouds have densities that vary from 10² to 10⁶ molecules of neutral hydrogen per cm³, with star formation occurring in regions with densities above 10⁴ molecules per cm³. Typically, only 1–10% of the cloud by volume is above the latter density. Prior to collapse, these clouds maintain their mechanical equilibrium through magnetic fields, turbulence, and rotation.
Many factors may disrupt the equilibrium of a giant molecular cloud, triggering a collapse and initiating the burst of star formation that can result in an open cluster. These include shock waves from a nearby supernova, collisions with other clouds, or gravitational interactions. Even without external triggers, regions of the cloud can reach conditions where they become unstable against collapse. The collapsing cloud region will undergo hierarchical fragmentation into ever smaller clumps, including a particularly dense form known as infrared dark clouds, eventually leading to the formation of up to several thousand stars. This star formation begins enshrouded in the collapsing cloud, blocking the protostars from sight but allowing infrared observation. In the Milky Way galaxy, the formation rate of open clusters is estimated to be one every few thousand years.
The hottest and most massive of the newly formed stars (known as OB stars) will emit intense ultraviolet radiation, which steadily ionizes the surrounding gas of the giant molecular cloud, forming an H II region. Stellar winds and radiation pressure from the massive stars begins to drive away the hot ionized gas at a velocity matching the speed of sound in the gas. After a few million years the cluster will experience its first core-collapse supernovae, which will also expel gas from the vicinity. In most cases these processes will strip the cluster of gas within ten million years and no further star formation will take place. Still, about half of the resulting protostellar objects will be left surrounded by circumstellar disks, many of which form accretion disks.
As only 30 to 40 per cent of the gas in the cloud core forms stars, the process of residual gas expulsion is highly damaging to the star formation process. All clusters thus suffer significant infant weight loss, while a large fraction undergo infant mortality. At this point, the formation of an open cluster will depend on whether the newly formed stars are gravitationally bound to each other; otherwise an unbound stellar association will result. Even when a cluster such as the Pleiades does form, it may only hold on to a third of the original stars, with the remainder becoming unbound once the gas is expelled. The young stars so released from their natal cluster become part of the Galactic field population.
Because most if not all stars form clustered, star clusters are to be viewed as the fundamental building blocks of galaxies. The violent gas-expulsion events that shape and destroy many star clusters at birth leave their imprint in the morphological and kinematical structures of galaxies. Most open clusters form with at least 100 stars and a mass of 50 or more solar masses. The largest clusters can have 10⁴ solar masses, with the massive cluster Westerlund 1 being estimated at 5 × 10⁴ solar masses, close to that of a globular cluster. While open clusters and globular clusters form two fairly distinct groups, there may not be a great deal of difference in appearance between a very sparse globular cluster and a very rich open cluster. Some astronomers believe the two types of star clusters form via the same basic mechanism, with the difference being that the conditions that allowed the formation of the very rich globular clusters containing hundreds of thousands of stars no longer prevail in the Milky Way.
It is common for two or more separate open clusters to form out of the same molecular cloud. In the Large Magellanic Cloud, both Hodge 301 and R136 are forming from the gases of the Tarantula Nebula, while in our own galaxy, tracing back the motion through space of the Hyades and Praesepe, two prominent nearby open clusters, suggests that they formed in the same cloud about 600 million years ago. Sometimes, two clusters born at the same time will form a binary cluster. The best known example in the Milky Way is the Double Cluster of NGC 869 and NGC 884 (sometimes mistakenly called h and χ Persei; h refers to a neighboring star and χ to both clusters), but at least 10 more double clusters are known to exist. Many more are known in the Small and Large Magellanic Clouds—they are easier to detect in external systems than in our own galaxy because projection effects can cause unrelated clusters within the Milky Way to appear close to each other.
Morphology and classification
Open clusters range from very sparse clusters with only a few members to large agglomerations containing thousands of stars. They usually consist of quite a distinct dense core, surrounded by a more diffuse 'corona' of cluster members. The core is typically about 3–4 light years across, with the corona extending to about 20 light years from the cluster centre. Typical star densities in the centre of a cluster are about 1.5 stars per cubic light year; the stellar density near the sun is about 0.003 stars per cubic light year.
Open clusters are often classified according to a scheme developed by Robert Trumpler in 1930. The Trumpler scheme gives a cluster a three-part designation, with a Roman numeral from I to IV indicating its concentration and detachment from the surrounding star field (from strongly to weakly concentrated), an Arabic numeral from 1 to 3 indicating the range in brightness of members (from small to large range), and p, m or r to indicate whether the cluster is poor, medium or rich in stars. An 'n' is appended if the cluster lies within nebulosity.
Under the Trumpler scheme, the Pleiades are classified as I3rn (strongly concentrated and richly populated with nebulosity present), while the nearby Hyades are classified as II3m (more dispersed, and with fewer members).
Numbers and distribution
There are over 1,000 known open clusters in our galaxy, but the true total may be up to ten times higher than that. In spiral galaxies, open clusters are largely found in the spiral arms where gas densities are highest and so most star formation occurs, and clusters usually disperse before they have had time to travel beyond their spiral arm. Open clusters are strongly concentrated close to the galactic plane, with a scale height in our galaxy of about 180 light years, compared to a galactic radius of approximately 100,000 light years.
In irregular galaxies, open clusters may be found throughout the galaxy, although their concentration is highest where the gas density is highest. Open clusters are not seen in elliptical galaxies: star formation ceased many millions of years ago in ellipticals, and so the open clusters which were originally present have long since dispersed.
In our galaxy, the distribution of clusters depends on age, with older clusters being preferentially found at greater distances from the galactic centre, generally at substantial distances above or below the galactic plane. Tidal forces are stronger nearer the centre of the galaxy, increasing the rate of disruption of clusters, and also the giant molecular clouds which cause the disruption of clusters are concentrated towards the inner regions of the galaxy, so clusters in the inner regions of the galaxy tend to get dispersed at a younger age than their counterparts in the outer regions.
Stellar composition
Because open clusters tend to be dispersed before most of their stars reach the end of their lives, the light from them tends to be dominated by the young, hot blue stars. These stars are the most massive, and have the shortest lives of a few tens of millions of years. The older open clusters tend to contain more yellow stars.
Some open clusters contain hot blue stars which seem to be much younger than the rest of the cluster. These blue stragglers are also observed in globular clusters, and in the very dense cores of globulars they are believed to arise when stars collide, forming a much hotter, more massive star. However, the stellar density in open clusters is much lower than that in globular clusters, and stellar collisions cannot explain the numbers of blue stragglers observed. Instead, it is thought that most of them probably originate when dynamical interactions with other stars cause a binary system to coalesce into one star.
Once they have exhausted their supply of hydrogen through nuclear fusion, medium to low mass stars shed their outer layers to form a planetary nebula and evolve into white dwarfs. While most clusters become dispersed before a large proportion of their members have reached the white dwarf stage, the number of white dwarfs in open clusters is still generally much lower than would be expected, given the age of the cluster and the expected initial mass distribution of the stars. One possible explanation for the lack of white dwarfs is that when a red giant expels its outer layers to become a planetary nebula, a slight asymmetry in the loss of material could give the star a 'kick' of a few kilometres per second, enough to eject it from the cluster.
Because of their high density, close encounters between stars in an open cluster are common. For a typical cluster with 1,000 stars with a 0.5 parsec half-mass radius, on average a star will have an encounter with another member every 10 million years. The rate is even higher in denser clusters. These encounters can have a significant impact on the extended circumstellar disks of material that surround many young stars. Tidal perturbations of large disks may result in the formation of massive planets and brown dwarfs, producing companions at distances of 100 AU or more from the host star.
Eventual fate
Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.
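To see what "escape velocity lower than the average stellar velocity" means in numbers, here is a rough sketch using the typical cluster quoted earlier (1,000 stars within a 0.5 parsec half-mass radius); the mean stellar mass is an assumed value, not one given in the text.

```python
import math

G_ASTRO = 4.30e-3   # gravitational constant in pc * (km/s)^2 per solar mass

def escape_velocity_km_s(total_mass_msun, radius_pc):
    """Escape velocity (km/s) from radius_pc in a cluster of the given mass."""
    return math.sqrt(2.0 * G_ASTRO * total_mass_msun / radius_pc)

n_stars = 1000
mean_stellar_mass = 0.5          # solar masses, an assumed typical value
cluster_mass = n_stars * mean_stellar_mass

print(escape_velocity_km_s(cluster_mass, 0.5))  # roughly 3 km/s at the half-mass radius
```

If the typical random velocities of the member stars exceed a few kilometres per second, such a cluster cannot hold itself together; even a bound cluster slowly loses the members that close encounters push above this threshold.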
Clusters which have enough mass to be gravitationally bound once the surrounding nebula has evaporated can remain distinct for many tens of millions of years, but over time internal and external processes tend also to disperse them. Internally, close encounters between stars can increase the velocity of a member beyond the escape velocity of the cluster. This results in the gradual 'evaporation' of cluster members.
Externally, about every half-billion years or so an open cluster tends to be disturbed by external factors such as passing close to or through a molecular cloud. The gravitational tidal forces generated by such an encounter tend to disrupt the cluster. Eventually, the cluster becomes a stream of stars, not close enough to be a cluster but all related and moving in similar directions at similar speeds. The timescale over which a cluster disrupts depends on its initial stellar density, with more tightly packed clusters persisting for longer. Estimated cluster half lives, after which half the original cluster members will have been lost, range from 150–800 million years, depending on the original density.
After a cluster has become gravitationally unbound, many of its constituent stars will still be moving through space on similar trajectories, in what is known as a stellar association, moving cluster, or moving group. Several of the brightest stars in the 'Plough' of Ursa Major are former members of an open cluster which now form such an association, in this case, the Ursa Major moving group. Eventually their slightly different relative velocities will see them scattered throughout the galaxy. Such a larger, dispersed grouping is then known as a stream if otherwise unrelated stars are found to share similar velocities and ages.
Studying stellar evolution
When a Hertzsprung-Russell diagram is plotted for an open cluster, most stars lie on the main sequence. The most massive stars have begun to evolve away from the main sequence and are becoming red giants; the position of the turn-off from the main sequence can be used to estimate the age of the cluster.
Because the stars in an open cluster are all at roughly the same distance from Earth, and were born at roughly the same time from the same raw material, the differences in apparent brightness among cluster members are due only to their mass. This makes open clusters very useful in the study of stellar evolution, because when comparing one star to another, many of the variable parameters are fixed.
The study of the abundances of lithium and beryllium in open cluster stars can give important clues about the evolution of stars and their interior structures. While hydrogen nuclei cannot fuse to form helium until the temperature reaches about 10 million K, lithium and beryllium are destroyed at temperatures of 2.5 million K and 3.5 million K respectively. This means that their abundances depend strongly on how much mixing occurs in stellar interiors. In open cluster stars, variables such as age and chemical composition are fixed, which makes these abundances easier to interpret.
Studies have shown that the abundances of these light elements are much lower than models of stellar evolution predict. While the reason for this underabundance is not yet fully understood, one possibility is that convection in stellar interiors can 'overshoot' into regions where radiation is normally the dominant mode of energy transport.
Astronomical distance scale
Determining the distances to astronomical objects is crucial to understanding them, but the vast majority of objects are too far away for their distances to be directly determined. Calibration of the astronomical distance scale relies on a sequence of indirect and sometimes uncertain measurements relating the closest objects, for which distances can be directly measured, to increasingly distant objects. Open clusters are a crucial step in this sequence.
The closest open clusters can have their distance measured directly by one of two methods. First, the parallax (the small change in apparent position over the course of a year caused by the Earth moving from one side of its orbit around the Sun to the other) of stars in close open clusters can be measured, like other individual stars. Clusters such as the Pleiades, Hyades and a few others within about 500 light years are close enough for this method to be viable, and results from the Hipparcos position-measuring satellite yielded accurate distances for several clusters.
The other direct method is the so-called moving cluster method. This relies on the fact that the stars of a cluster share a common motion through space. Measuring the proper motions of cluster members and plotting their apparent motions across the sky will reveal that they converge on a vanishing point. The radial velocity of cluster members can be determined from Doppler shift measurements of their spectra, and once the radial velocity, proper motion and angular distance from the cluster to its vanishing point are known, simple trigonometry will reveal the distance to the cluster. The Hyades are the best known application of this method, which reveals their distance to be 46.3 parsecs.
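A small sketch of the trigonometry behind the moving cluster method follows. The radial velocity, proper motion, and angle to the vanishing point used below are illustrative values of roughly the right size for the Hyades, whose distance is quoted above as 46.3 parsecs; they are assumptions, not the measured inputs.

```python
import math

def moving_cluster_distance_pc(radial_velocity_km_s, proper_motion_arcsec_yr,
                               angle_to_vanishing_point_deg):
    """Distance in parsecs from the moving cluster method.
    Tangential velocity: v_t = v_r * tan(theta), and v_t = 4.74 * mu * d,
    so d = v_r * tan(theta) / (4.74 * mu)."""
    theta = math.radians(angle_to_vanishing_point_deg)
    v_t = radial_velocity_km_s * math.tan(theta)
    return v_t / (4.74 * proper_motion_arcsec_yr)

# Illustrative Hyades-like numbers (assumed, not measured values):
print(moving_cluster_distance_pc(39.0, 0.11, 30.0))  # roughly 40-45 pc
```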
Once the distances to nearby clusters have been established, further techniques can extend the distance scale to more distant clusters. By matching the main sequence on the Hertzsprung-Russell diagram for a cluster at a known distance with that of a more distant cluster, the distance to the more distant cluster can be estimated. The nearest open cluster is the Hyades: the stellar association consisting of most of the Plough stars is at about half the distance of the Hyades, but is a stellar association rather than an open cluster as the stars are not gravitationally bound to each other. The most distant known open cluster in our galaxy is Berkeley 29, at a distance of about 15,000 parsecs. Open clusters are also easily detected in many of the galaxies of the Local Group.
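Main-sequence fitting reduces to the inverse-square law: if a cluster's main sequence appears Δm magnitudes fainter than that of a reference cluster at a known distance, the cluster is 10^(Δm/5) times farther away. A short sketch, using the Hyades distance quoted above and assumed magnitude offsets chosen purely for illustration:

```python
def distance_from_ms_fitting(reference_distance_pc, magnitude_offset):
    """Distance of a cluster whose main sequence appears magnitude_offset
    magnitudes fainter than that of a reference cluster at reference_distance_pc."""
    return reference_distance_pc * 10.0 ** (magnitude_offset / 5.0)

hyades_distance = 46.3   # pc, from the moving cluster method

print(distance_from_ms_fitting(hyades_distance, 5.0))   # 5 mag fainter -> about 463 pc
print(distance_from_ms_fitting(hyades_distance, 12.6))  # about 15,000 pc, the Berkeley 29 scale
```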
Accurate knowledge of open cluster distances is vital for calibrating the period-luminosity relationship shown by variable stars such as cepheid and RR Lyrae stars, which allows them to be used as standard candles. These luminous stars can be detected at great distances, and are then used to extend the distance scale to nearby galaxies in the Local Group.
See also
- Stellar associations
- Moving groups
- Open cluster family
- Open cluster remnant
- Star clusters
- List of open clusters
- Frommert, Hartmut; Kronberg, Christine (August 27, 2007). "Open Star Clusters". SEDS. University of Arizona, Lunar and Planetary Lab. Retrieved 2009-01-02.
- Karttunen, Hannu; et al. (2003). Fundamental astronomy. Physics and Astronomy Online Library (4 ed.). Springer. p. 321. ISBN 3-540-00179-4.
- Payne-Gaposchkin, C. (1979). Stars and clusters. Cambridge, Mass.: Harvard University Press. Bibcode:1979stcl.book.....P. ISBN 0-674-83440-2.
- A good example of this is NGC 2244, in the Rosette Nebula. See also Johnson, Harold L. (November 1962). "The Galactic Cluster, NGC 2244". Astrophysical Journal 136: 1135. Bibcode:1962ApJ...136.1135J. doi:10.1086/147466.
- Neata, Emil. "Open Star Clusters: Information and Observations". Night Sky Info. Retrieved 2009-01-02.
- "VISTA Finds 96 Star Clusters Hidden Behind Dust". ESO Science Release. Retrieved 3 August 2011.
- Moore, Patrick; Rees, Robin (2011), Patrick Moore's Data Book of Astronomy (2nd ed.), Cambridge University Press, p. 339, ISBN 0-521-89935-4
- Jones, Kenneth Glyn (1991). Messier's nebulae and star clusters. Practical astronomy handbook (2) (2nd ed.). Cambridge University Press. pp. 6–7. ISBN 0-521-37079-5.
- Kaler, James B. (2006). Cambridge Encyclopedia of Stars. Cambridge University Press. p. 167. ISBN 0-521-81803-6.
- Maran, Stephen P.; Marschall, Laurence A. (2009), Galileo's new universe: the revolution in our understanding of the cosmos, BenBella Books, p. 128, ISBN 1-933771-59-3
- D'Onofrio, Mauro; Burigana, Carlo. "Introduction". In Mauro D'Onofrio, Carlo Burigana. Questions of Modern Cosmology: Galileo's Legacy. Springer, 2009. p. 1. ISBN 3-642-00791-0.
- Fodera-Serio, G.; Indorato, L.; Nastasi, P. (February 1985), "Hodierna's Observations of Nebulae and his Cosmology", Journal for the History of Astronomy 16 (1): 1, Bibcode:1985JHA....16....1F
- Jones, K. G. (August 1986). "Some Notes on Hodierna's Nebulae". Journal of the History of Astronomy 17 (50): 187–188. Bibcode:1986JHA....17..187J.
- Chapman, A. (December 1989), "William Herschel and the Measurement of Space", Royal Astronomical Society Quarterly Journal 30 (4): 399–418, Bibcode:1989QJRAS..30..399C
- Michell, J. (1767). "An Inquiry into the probable Parallax, and Magnitude, of the Fixed Stars, from the Quantity of Light which they afford us, and the particular Circumstances of their Situation". Philosophical Transactions 57 (0): 234–264. Bibcode:1767RSPT...57..234M. doi:10.1098/rstl.1767.0028.
- Hoskin, M. (1979). "Herschel, William's Early Investigations of Nebulae - a Reassessment". Journal for the History of Astronomy 10: 165–176. Bibcode:1979JHA....10..165H.
- Hoskin, M. (February 1987). "Herschel's Cosmology". Journal of the History of Astronomy 18 (1): 1–34. Bibcode:1987JHA....18....1H. See page 20.
- Bok, Bart J.; Bok, Priscilla F. (1981). The Milky Way. Harvard books on astronomy (5th ed.). Harvard University Press. p. 136. ISBN 0-674-57503-2.
- Binney, James; Merrifield, Michael (1998), Galactic astronomy, Princeton series in astrophysics, Princeton University Press, p. 377, ISBN 0-691-02565-7
- Basu, Baidyanath (2003). An Introduction to Astrophysics. PHI Learning Pvt. Ltd. p. 218. ISBN 81-203-1121-3.
- Trumpler, R. J. (December 1925). "Spectral Types in Open Clusters". Publications of the Astronomical Society of the Pacific 37 (220): 307. Bibcode:1925PASP...37..307T. doi:10.1086/123509.
- Barnard, E. E., "Micrometric measures of star clusters", Publications of the Yerkes Observatory 6: 1–106, Bibcode:1931PYerO...6....1B
- van Maanen, Adriaan (1919), "No. 167. Investigations on proper motion. First paper: The motions of 85 stars in the neighborhood of Atlas and Pleione", Contributions from the Mount Wilson Observatory (Carnegie Institution of Washington) 167: 1–15, Bibcode:1919CMWCI.167....1V
- van Maanen, Adriaan (July 1945), "Investigations on Proper Motion. XXIV. Further Measures in the Pleiades Cluster", Astrophysical Journal 102: 26–31, Bibcode:1945ApJ...102...26V, doi:10.1086/144736
- Strand, K. Aa. (December 1977), "Hertzsprung's Contributions to the HR Diagram", in Philip, A. G. Davis; DeVorkin, David H., The HR Diagram, In Memory of Henry Norris Russell, IAU Symposium No. 80, held November 2, 1977, National Academy of Sciences, Washington, DC, pp. 55–59, Bibcode:1977IAUS...80S..55S
- Lada, C. J. (January 2010), "The physics and modes of star cluster formation: observations", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368 (1913): 713–731, arXiv:0911.0779, Bibcode:2010RSPTA.368..713L, doi:10.1098/rsta.2009.0264
- Shu, Frank H.; Adams, Fred C.; Lizano, Susana (1987), "Star formation in molecular clouds - Observation and theory", Annual review of astronomy and astrophysics 25: 23–81, Bibcode:1987ARA&A..25...23S, doi:10.1146/annurev.aa.25.090187.000323
- Battinelli, P.; Capuzzo-Dolcetta, R. (1991). "Formation and evolutionary properties of the Galactic open cluster system". Monthly Notices of the Royal Astronomical Society 249: 76–83. Bibcode:1991MNRAS.249...76B.
- Kroupa, Pavel; Aarseth, Sverre; Hurley, Jarrod (March 2001), "The formation of a bound star cluster: from the Orion nebula cluster to the Pleiades", Monthly Notices of the Royal Astronomical Society 321 (4): 699–712, arXiv:astro-ph/0009470, Bibcode:2001MNRAS.321..699K, doi:10.1046/j.1365-8711.2001.04050.x
- Kroupa, P. (October 4–7, 2004). "The Fundamental Building Blocks of Galaxies". In C. Turon, K.S. O'Flaherty, M.A.C. Perryman. Proceedings of the Gaia Symposium "The Three-Dimensional Universe with Gaia (ESA SP-576). Observatoire de Paris-Meudon. p. 629. arXiv:astro-ph/0412069.
- Elmegreen, Bruce G.; Efremov, Yuri N. (1997). "A Universal Formation Mechanism for Open and Globular Clusters in Turbulent Gas". The Astrophysical Journal 480 (1): 235–245. Bibcode:1997ApJ...480..235E. doi:10.1086/303966.
- Eggen, O. J. (1960). "Stellar groups, VII. The structure of the Hyades group". Monthly Notices of the Royal Astronomical Society 120: 540–562. Bibcode:1960MNRAS.120..540E.
- Subramaniam, A.; Gorti, U.; Sagar, R.; Bhatt, H. C. (1995). "Probable binary open star clusters in the Galaxy". Astronomy and Astrophysics 302: 86–89. Bibcode:1995A&A...302...86S.
- Nilakshi, S.R.; Pandey, A.K.; Mohan, V. (2002). "A study of spatial structure of galactic open star clusters". Astronomy and Astrophysics 383 (1): 153–162. Bibcode:2002A&A...383..153N. doi:10.1051/0004-6361:20011719.
- Trumpler, R.J. (1930). "Preliminary results on the distances, dimensions and space distribution of open star clusters". Lick Observatory bulletin (Berkeley: University of California Press) 14 (420): 154–188. Bibcode:1930LicOB..14..154T.
- Dias, W.S.; Alessi, B.S.; Moitinho, A.; Lépine, J.R.D. (2002). "New catalogue of optically visible open clusters and candidates". Astronomy and Astrophysics 389 (3): 871–873. arXiv:astro-ph/0203351. Bibcode:2002A&A...389..871D. doi:10.1051/0004-6361:20020668.
- Janes, K.A.; Phelps, R.L. (1980). "The galactic system of old star clusters: The development of the galactic disk". The Astronomical Journal 108: 1773–1785. Bibcode:1994AJ....108.1773J. doi:10.1086/117192.
- Hunter, D. (1997). "Star Formation in Irregular Galaxies: A Review of Several Key Questions". Publications of the Astronomical Society of the Pacific 109: 937–950. Bibcode:1997PASP..109..937H. doi:10.1086/133965.
- Binney, James; Merrifield, Michael (1998). Galactic Astronomy. Princeton: Princeton University Press. ISBN 978-0-691-02565-0. OCLC 39108765.
- Friel, Eileen D. (1995). "The Old Open Clusters Of The Milky Way". Annual Reviews of Astronomy & Astrophysics 33: 381–414. Bibcode:1995ARA&A..33..381F. doi:10.1146/annurev.aa.33.090195.002121.
- van den Bergh, S.; McClure, R.D. (1980). "Galactic distribution of the oldest open clusters". Astronomy & Astrophysics 88: 360. Bibcode:1980A&A....88..360V.
- Andronov, N.; Pinsonneault, M.; Terndrup, D. (2003). "Formation of Blue Stragglers in Open Clusters". Bulletin of the American Astronomical Society 35: 1343. Bibcode:2003AAS...203.8504A.
- Fellhauer, M.; Lin, D.N.C.; Bolte, M. Aarseth, S.J.; Williams K.A. (2003). "The White Dwarf Deficit in Open Clusters: Dynamical Processes". The Astrophysical Journal 595 (1): L53–L56. arXiv:astro-ph/0308261. Bibcode:2003ApJ...595L..53F. doi:10.1086/379005.
- Thies, Ingo; Kroupa, Pavel; Goodwin, Simon P.; Stamatellos, Dimitrios; Whitworth, Anthony P. (July 2010), "Tidally Induced Brown Dwarf and Planet Formation in Circumstellar Disks", The Astrophysical Journal 717 (1): 577–585, arXiv:1005.3017, Bibcode:2010ApJ...717..577T, doi:10.1088/0004-637X/717/1/577
- Hills, J. G. (February 1, 1980). "The effect of mass loss on the dynamical evolution of a stellar system - Analytic approximations". Astrophysical Journal 235 (1): 986–991. Bibcode:1980ApJ...235..986H. doi:10.1086/157703.
- de La Fuente, M.R. (1998). "Dynamical Evolution of Open Star Clusters". Publications of the Astronomical Society of the Pacific 110 (751): 1117–1117. Bibcode:1998PASP..110.1117D. doi:10.1086/316220.
- Soderblom, David R.; Mayor, Michel (1993). "Stellar kinematic groups. I - The Ursa Major group". Astronomical Journal 105 (1): 226–249. Bibcode:1993AJ....105..226S. doi:10.1086/116422. ISSN 0004-6256.
- Majewski, S. R.; Hawley, S. L.; Munn, J. A. (1996). "Moving Groups, Stellar Streams and Phase Space Substructure in the Galactic Halo". ASP Conference Series 92: 119. Bibcode:1996ASPC...92..119M.
- Sick, Jonathan; de Jong, R. S. (2006). "A New Method for Detecting Stellar Streams in the Halos of Galaxies". Bulletin of the American Astronomical Society 38: 1191. Bibcode:2006AAS...20921105S.
- "Diagrammi degli ammassi ed evoluzione stellare" (in italian). O.R.S.A. - Organizzazione Ricerche e Studi di Astronomia. Retrieved 2009-01-06.
- VandenBerg, D.A.; Stetson, P.B. (2004). "On the Old Open Clusters M67 and NGC 188: Convective Core Overshooting, Color-Temperature Relations, Distances, and Ages". Publications of the Astronomical Society of the Pacific 116 (825): 997–1011. Bibcode:2004PASP..116..997V. doi:10.1086/426340.
- Keel, Bill. "The Extragalactic Distance Scale". Department of Physics and Astronomy - University of Alabama. Retrieved 2009-01-09.
- Brown, A.G.A. (2001). "Open clusters and OB associations: a review". Revista Mexicana de Astronomía y Astrofísica 11: 89–96. Bibcode:2001RMxAC..11...89B.
- Percival, S. M.; Salaris, M.; Kilkenny, D. (2003). "The open cluster distance scale - A new empirical approach". Astronomy & Astrophysics 400 (2): 541–552. arXiv:astro-ph/0301219. Bibcode:2003A&A...400..541P. doi:10.1051/0004-6361:20030092.
- Hanson, R.B. (1975). "A study of the motion, membership, and distance of the Hyades cluster". Astronomical Journal 80: 379–401. Bibcode:1975AJ.....80..379H. doi:10.1086/111753.
- Bragaglia, A.; Held, E.V.; Tosi M. (2005). "Radial velocities and membership of stars in the old, distant open cluster Berkeley 29". Astronomy and Astrophysics 429 (3): 881–886. arXiv:astro-ph/0409046. Bibcode:2005A&A...429..881B. doi:10.1051/0004-6361:20041049.
- Rowan-Robinson, Michael (March 1988). "The extragalactic distance scale". Space Science Reviews 48 (1–2): 1–71. Bibcode:1988SSRv...48....1R. doi:10.1007/BF00183129. ISSN 0038-6308.
Further reading
- Kaufmann, W. J. (1994). Universe. W H Freeman. ISBN 0-7167-2379-4.
- Smith, E.V.P.; Jacobs, K.C.; Zeilik, M.; Gregory, S.A. (1997). Introductory Astronomy and Astrophysics. Thomson Learning. ISBN 0-03-006228-4.
- The Jewel Box (also known as NGC 4755 or Kappa Crucis Cluster) - open cluster in the Crux constellation @ SKY-MAP.ORG
- Open Star Clusters @ SEDS Messier pages
- A general overview of open clusters
- Open and globular clusters overview
- The moving cluster method
- Open Clusters - Information and amateur observations
- Clickable table of Messier objects including open clusters | http://en.wikipedia.org/wiki/Open_cluster | 13 |
92 | A propeller is essentially a type of fan which transmits power by converting rotational motion into thrust for propulsion of a vehicle such as an aircraft, ship, or submarine through a mass such as water or air, by rotating two or more twisted blades about a central shaft, in a manner analogous to rotating a screw through a solid. The blades of a propeller act as rotating wings (the blades of a propeller are in fact wings or airfoils), and produce force through application of both Bernoulli's principle and Newton's third law, generating a difference in pressure between the forward and rear surfaces of the airfoil-shaped blades and by accelerating a mass of air rearward.
In sculling, a single blade is moved through an arc, from side to side taking care to keep presenting the blade to the water at the effective angle. The innovation introduced with the screw propeller was the extension of that arc through more than 360° by attaching the blade to a rotating shaft. In practice, there is nearly always more than one blade so as to balance the forces involved. The exception is a single-blade propeller system.
The origin of the actual screw propeller starts with Archimedes, who used a screw to lift water for irrigation and bailing boats, so famously that it became known as the Archimedes screw. It was probably an application of spiral movement in space (spirals were a special study of Archimedes) to a hollow segmented water-wheel used for irrigation by Egyptians for centuries. Leonardo da Vinci adopted the principle to drive his theoretical helicopter, sketches of which involved a large canvas screw overhead.
In 1784, J. P. Paucton proposed a gyrocopter-like aircraft using similar screws for both lift and propulsion. At about the same time, James Watt proposed using screws to propel boats, although he did not use them for his steam engines. This was not his own invention, though; Toogood and Hays had patented it a century earlier, and it had seen only occasional use as a means of propelling boats since that time.
By 1827 Josef Ressel had invented a screw propeller which had multiple blades fastened around a conical base; this new method of propulsion allowed steam ships to travel at much greater speeds without using sails thereby making ocean travel faster. Propellers remained extremely inefficient and little-utilized until 1835, when Francis Pettit Smith discovered a new way of building propellers. Up to that time, propellers were literally screws, of considerable length. But during the testing of a boat propelled by one, the screw snapped off, leaving a fragment shaped much like a modern boat propeller. The boat moved faster with the broken propeller.
At about the same time, Frédéric Sauvage and John Ericsson applied for patents on vaguely similar, although less efficient shortened-screw propellers, leading to an apparently permanent controversy as to who is the official inventor among those three men. Ericsson became widely famous when he built the “Monitor”, an armoured battleship that in 1862 triumphed over the Confederate States’ Merrimac in an American Civil War sea battle.
The first screw propeller to be powered by a gasoline engine, fitted to a small boat (now known as a powerboat), was installed by Frederick Lanchester of Birmingham and tested in Oxford. The first 'real-world' use of a propeller was by David Bushnell, who used hand-powered screw propellers to navigate his submarine "Turtle" in 1776.
The twisted airfoil (aerofoil) shape of modern aircraft propellers was pioneered by the Wright brothers when they found that all existing knowledge on propellers (mostly naval) was determined by trial and error and that no one knew exactly how they worked. They found that a propeller is essentially the same as a wing and so were able to use data collated from their earlier wind tunnel experiments on wings. They also found that the relative angle of attack from the forward movement of the aircraft was different for all points along the length of the blade, thus it was necessary to introduce a twist along its length. Their original propeller blades are only about 5% less efficient than the modern equivalent, some 100 years later.
Alberto Santos Dumont was another early pioneer, having designed propellers before the Wright Brothers (albeit not as efficient) for his airships. He applied the knowledge he gained from experiences with airships to make a propeller with a steel shaft and aluminium blades for his 14 bis biplane. Some of his designs used a bent aluminium sheet for blades, thus creating an airfoil shape. These are heavily undercambered because of this and combined with the lack of a lengthwise twist made them less efficient than the Wright propellers. Even so, this was perhaps the first use of aluminium in the construction of an airscrew.
Propellers are similar in aerofoil section to a low drag wing and as such are poor in operation when at other than their optimum angle of attack. Control systems are required to counter the need for accurate matching of pitch to flight speed and engine speed.
The purpose of varying pitch angle with a variable pitch propeller is to maintain an optimal angle of attack (maximum lift to drag ratio) on the propeller blades as aircraft speed varies. Early pitch control settings were pilot operated, either two-position or manually variable. Later, automatic propellers were developed to maintain an optimum angle of attack. They did this by balancing the centripetal twisting moment on the blades and a set of counterweights against a spring and the aerodynamic forces on the blade. Automatic props had the advantage of being simple and requiring no external control, but a particular propeller's performance was difficult to match with that of the aircraft's powerplant. An improvement on the automatic type was the constant-speed propeller. Constant speed propellers allow the pilot to select a rotational speed for maximum engine power or maximum efficiency, and a propeller governor acts as a closed-loop controller to vary propeller pitch angle as required to maintain the RPM commanded by the pilot. In most aircraft this system is hydraulic, with engine oil serving as the hydraulic fluid. However, electrically controlled propellers were developed during World War II and saw extensive use on military aircraft.
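To illustrate the closed-loop idea, the sketch below implements a minimal proportional governor in C. It is not any particular manufacturer's control law; the gain, pitch limits and one-line "engine model" are invented for illustration, and a real governor acts hydraulically on blade pitch rather than on an abstract pitch number.

    #include <stdio.h>

    /* Minimal sketch of a constant-speed propeller governor: the pilot selects
     * an RPM, and the governor coarsens blade pitch when the engine overspeeds
     * (absorbing more torque) and fines it off when it underspeeds.
     * The gain, limits and the single-line "engine model" are invented. */
    int main(void) {
        const double commanded_rpm = 2400.0;  /* pilot's RPM selection */
        const double kp = 0.01;               /* proportional gain: degrees of pitch per RPM of error */
        double pitch_deg = 15.0;              /* current blade pitch */
        double rpm = 2600.0;                  /* current engine speed */

        for (int step = 0; step < 8; ++step) {
            double error = rpm - commanded_rpm;        /* positive means overspeed */
            pitch_deg += kp * error;                   /* coarsen pitch to pull RPM down */
            if (pitch_deg < 12.0) pitch_deg = 12.0;    /* fine-pitch stop */
            if (pitch_deg > 40.0) pitch_deg = 40.0;    /* coarse-pitch limit */

            /* toy engine/propeller model: each extra degree of pitch costs 50 RPM */
            rpm = 2600.0 - 50.0 * (pitch_deg - 15.0);
            printf("step %d: pitch %5.2f deg, rpm %6.1f\n", step, pitch_deg, rpm);
        }
        return 0;
    }

Running the loop shows the pitch settling a few degrees coarser and the RPM converging on the commanded value, which is the behaviour the governor is meant to produce.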
On some variable-pitch propellers, the blades can be rotated parallel to the airflow to reduce drag and increase gliding distance in case of an engine failure. This is called feathering. Feathering propellers were developed for military fighter aircraft prior to World War II, as a fighter is more likely to experience an engine failure due to the inherent danger of combat. Feathering propellers are used on multi-engine aircraft and are meant to reduce drag on a failed engine. When used on powered gliders and single-engine turbine powered aircraft they increase the gliding distance. Most feathering systems for reciprocating engines sense a drop in oil pressure and move the blades toward the feather position, and require the pilot to pull the prop control back to disengage the high-pitch stop pins before the engine reaches idle RPM. Turbopropeller control systems usually utilize a negative torque sensor in the reduction gearbox which moves the blades toward feather when the engine is no longer providing power to the propeller. Depending on design, the pilot may have to push a button to override the high-pitch stops and complete the feathering process, or the feathering process may be totally automatic.
In some aircraft (e.g., the C-130 Hercules), the pilot can manually override the constant speed mechanism to reverse the blade pitch angle, and thus the thrust of the engine. This is used to help slow the plane down after landing in order to save wear on the brakes and tires, but in some cases also allows the aircraft to back up on its own.
A further consideration is the number and the shape of the blades used. Increasing the aspect ratio of the blades reduces drag but the amount of thrust produced depends on blade area, so using high aspect blades can lead to the need for a propeller diameter which is unusable. A further balance is that using a smaller number of blades reduces interference effects between the blades, but to have sufficient blade area to transmit the available power within a set diameter means a compromise is needed. Increasing the number of blades also decreases the amount of work each blade is required to perform, limiting the local Mach number - a significant performance limit on propellers.
Contra-rotating propellers use a second propeller rotating in the opposite direction immediately 'downstream' of the main propeller so as to recover energy lost in the swirling motion of the air in the propeller slipstream. Contra-rotation also increases power without increasing propeller diameter, and provides a counter to the torque effect of a high-power piston engine as well as to the gyroscopic precession effects and the slipstream swirl. However, on small aircraft the added cost, complexity, weight and noise of the system rarely make it worthwhile.
The propeller is usually attached to the crankshaft of the engine, either directly or through a gearbox. Light aircraft sometimes forego the weight, complexity and cost of gearing but on some larger aircraft and some turboprop aircraft it is essential.
A propeller's performance suffers as the blade speed exceeds the speed of sound. As the relative air speed at the blade is rotation speed plus axial speed, a propeller blade tip will reach sonic speed sometime before the rest of the aircraft (with a theoretical blade the maximum aircraft speed is about 845 km/h (Mach 0.7) at sea-level, in reality it is rather lower). When a blade tip becomes supersonic, drag and torque resistance increase suddenly and shock waves form creating a sharp increase in noise. Aircraft with conventional propellers, therefore, do not usually fly faster than Mach 0.6. There are certain propeller-driven aircraft, usually military, which do operate at Mach 0.8 or higher, although there is considerable fall off in efficiency.
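The helical tip speed referred to here is the vector sum of the rotational tip speed and the aircraft's forward speed, which is why the tip goes supersonic well before the aircraft does. The short C sketch below computes a tip Mach number from assumed figures (propeller diameter, RPM, forward speed and a sea-level speed of sound of about 340 m/s); the numbers are illustrative, not taken from any particular aircraft.

    #include <math.h>
    #include <stdio.h>

    /* Helical tip speed: the rotational tip speed and the forward speed add as
     * perpendicular components, so the blade tip reaches sonic speed well before
     * the aircraft does. All input figures are illustrative. */
    int main(void) {
        const double pi = 3.14159265358979;
        const double diameter_m = 2.8;   /* propeller diameter */
        const double rpm = 2400.0;       /* shaft speed */
        const double forward_ms = 180.0; /* forward speed, m/s (about 650 km/h) */
        const double sound_ms = 340.0;   /* approximate speed of sound at sea level */

        double n = rpm / 60.0;                        /* revolutions per second */
        double rotational = pi * diameter_m * n;      /* tip speed in the plane of rotation */
        double helical = sqrt(rotational * rotational + forward_ms * forward_ms);

        printf("rotational tip speed: %.0f m/s\n", rotational);
        printf("helical tip speed:    %.0f m/s (Mach %.2f)\n", helical, helical / sound_ms);
        return 0;
    }

With these invented figures the tip is already past Mach 1 even though the aircraft itself is flying at roughly Mach 0.5, which is the effect the paragraph above describes.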
There have been efforts to develop propellers for aircraft at high subsonic speeds. The 'fix' is similar to that of transonic wing design. The maximum relative velocity is kept as low as possible by careful control of pitch to allow the blades to have large helix angles; thin blade sections are used and the blades are swept back in a scimitar shape (Scimitar propeller); a large number of blades are used to reduce work per blade and so circulation strength; contra-rotation is used. The propellers designed are more efficient than turbo-fans and their cruising speed (Mach 0.7–0.85) is suitable for airliners, but the noise generated is tremendous (see the Antonov An-70 and Tupolev Tu-95 for examples of such a design).
See also Airscrew wind generator.
Most propellers have their axis of rotation parallel to the fluid flow. There have however been some attempts to power vehicles with the same principles behind vertical axis wind turbines, where the rotation is perpendicular to fluid flow. Most attempts have been unsuccessful. Blades that can vary their angle of attack during rotation have aerodynamics similar to flapping flight. Flapping flight is still poorly understood and almost never seriously used in engineering because of the strong coupling of lift, thrust and control forces.
The Voith–Schneider propeller is another successful example, operating in water.
The main parts of a marine propeller include the trailing edge, the fillet area, the hub or boss, the hub or boss cap, the leading edge, the propeller shaft, the stern tube bearing and the stern tube.
A propeller is the most common propulsor on ships, imparting momentum to a fluid which causes a force to act on the ship.
The ideal efficiency of any size propeller is that of an actuator disc in an ideal fluid. An actual marine propeller is made up of sections of helicoidal surfaces which act together 'screwing' through the water (hence the common reference to marine propellers as "screws"). Three, four, or five blades are most common in marine propellers, although designs which are intended to operate at reduced noise will have more blades. The blades are attached to a boss (hub), which should be as small as the needs of strength allow - with fixed pitch propellers the blades and boss are usually a single casting.
An alternative design is the controllable pitch propeller (CPP), where the blades are rotated normal to the drive shaft by additional machinery - usually hydraulics - at the hub and control linkages running down the shaft. This allows the drive machinery to operate at a constant speed while the propeller loading is changed to match operating conditions. It also eliminates the need for a reversing gear and allows for more rapid change to thrust, as the revolutions are constant. This type of propeller is most common on ships such as tugs where there can be enormous differences in propeller loading when towing compared to running free, a change which could cause conventional propellers to lock up as insufficient torque is generated. The downside of a CPP is the large hub which increases the chance of cavitation and the mechanical complexity which limits transmission power.
For smaller motors there are self-pitching propellers. The blades freely move through an entire circle on an axis at right angles to the shaft. This allows hydrodynamic and centrifugal forces to 'set' the angle the blades reach and so the pitch of the propeller.
A propeller that turns clockwise to produce forward thrust, when viewed from aft, is called right-handed. One that turns anticlockwise is said to be left-handed. Larger vessels often have twin screws to reduce heeling torque; with such counter-rotating propellers the starboard screw is usually right-handed and the port left-handed, an arrangement called outward turning. The opposite case is called inward turning. Another possibility is contra-rotating propellers, where two propellers rotate in opposing directions on a single shaft.
The blade outline is defined either by a projection on a plane normal to the propeller shaft (projected outline) or by setting the circumferential chord across the blade at a given radius against radius (developed outline). The outline is usually symmetrical about a given radial line termed the median. If the median is curved back relative to the direction of rotation the propeller is said to have skew back. The skew is expressed in terms of circumferential displacement at the blade tips. If the blade face in profile is not normal to the axis it is termed raked, expressed as a percentage of total diameter.
Each blade's pitch and thickness vary with radius. Early blades had a flat face and an arced back (sometimes called a circular back, as the arc was part of a circle); modern propeller blades have aerofoil sections. The camber line is the line through the mid-thickness of a single blade. The camber is the maximum difference between the camber line and the chord joining the trailing and leading edges. The camber is expressed as a percentage of the chord.
The radius of maximum thickness is usually forward of the mid-chord point with the blades thinning to a minimum at the tips. The thickness is set by the demands of strength and the ratio of thickness to total diameter is called blade thickness fraction.
The ratio of pitch to diameter is called pitch ratio. Due to the complexities of modern propellers a nominal pitch is given, usually measured at a radius of 70% of the total.
Blade area is given as a ratio of the total area of the propeller disc, either as developed blade area ratio or projected blade area ratio.
Mechanical ship propulsion began with the steam ship. The first successful ship of this type is a matter of debate; candidate inventors of the 18th century include William Symington, the Marquis de Jouffroy, John Fitch and Robert Fulton, however William Symington's ship the Charlotte Dundas is regarded as the world's "first practical steamboat". Paddlewheels as the main motive source became standard on these early vessels (see Paddle steamer). Robert Fulton had tested, and rejected, the screw propeller.
The screw (as opposed to paddlewheels) was introduced in the latter half of the 18th century. David Bushnell's invention of the submarine (Turtle) in 1775 used hand-powered screws for vertical and horizontal propulsion. The Bohemian engineer Josef Ressel designed and patented the first practicable screw propeller in 1827. Francis Pettit Smith tested a similar one in 1836. In 1839, John Ericsson introduced the screw propeller design onto a ship which then sailed over the Atlantic Ocean in 40 days. Mixed paddle and propeller designs were still being used at this time (vide the 1858 SS Great Eastern).
In 1848 the British Admiralty held a tug of war contest between a propeller-driven ship, Rattler, and a paddle wheel ship, Alecto. Rattler won, towing Alecto astern at 2.8 knots (5 km/h), but it was not until the early 20th century that paddle-propelled vessels were entirely superseded. The screw propeller replaced the paddles owing to its greater efficiency, compactness, less complex power transmission system, and reduced susceptibility to damage (especially in battle).
Initial designs owed much to the ordinary screw from which their name derived - early propellers consisted of only two blades and matched in profile the length of a single screw rotation. This design was common, but inventors endlessly experimented with different profiles and greater numbers of blades. The propeller screw design stabilized by the 1880s.
In the early days of steam power for ships, when both paddle wheels and screws were in use, ships were often characterized by their type of propellers, leading to terms like screw steamer or screw sloop.
Propellers are referred to as "lift" devices, while paddles are "drag" devices.
Cavitation can occur if an attempt is made to transmit too much power through the screw. At high rotating speeds or under heavy load (high blade lift coefficient), the pressure on the inlet side of the blade can drop below the vapour pressure of the water, resulting in the formation of a pocket of vapour, which can no longer effectively transfer force to the water (stretching the analogy to a screw, you might say the water thread 'strips'). This effect wastes energy, makes the propeller "noisy" as the vapour bubbles collapse, and most seriously, erodes the screw's surface due to localized shock waves against the blade surface. Cavitation can, however, be used as an advantage in design of very high performance propellers, in form of the supercavitating propeller. (See also fluid dynamics). A similar, but quite separate issue, is ventilation, which occurs when a propeller operating near the surface draws air into the blades, causing a similar loss of power and shaft vibration, but without the related potential blade surface damage caused by cavitation. Both effects can be mitigated by increasing the submerged depth of the propeller: cavitation is reduced because the hydrostatic pressure increases the margin to the vapor pressure, and ventilation because it is further from surface waves and other air pockets that might be drawn into the slipstream.
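As a rough illustration of the depth effect described above, the sketch below compares the static pressure at the blade with the vapour pressure for an assumed pressure drop on the suction side. All values (water properties, vapour pressure, assumed pressure drop) are rounded textbook-style numbers, not data for any real propeller.

    #include <stdio.h>

    /* Rough cavitation check: cavitation is likely when the local pressure on
     * the blade's suction side falls below the vapour pressure. Increasing the
     * submerged depth raises the static pressure and therefore the margin. */
    int main(void) {
        const double p_atm = 101325.0;        /* Pa, atmospheric pressure */
        const double rho = 1025.0;            /* kg/m^3, sea water */
        const double g = 9.81;                /* m/s^2 */
        const double p_vap = 2300.0;          /* Pa, vapour pressure of water near 20 C */
        const double suction_drop = 120000.0; /* Pa, assumed drop on the blade back */

        for (double depth = 1.0; depth <= 9.0; depth += 2.0) {
            double p_static = p_atm + rho * g * depth;  /* hydrostatic pressure at the blade */
            double p_local = p_static - suction_drop;   /* pressure on the suction side */
            printf("depth %4.1f m: local pressure %9.0f Pa -> %s\n",
                   depth, p_local,
                   p_local < p_vap ? "cavitation likely" : "no cavitation");
        }
        return 0;
    }

For this particular (invented) pressure drop the blade cavitates at one metre of submergence but not at three or more, which is the mitigation by depth mentioned above.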
The force has two parts - that normal to the direction of flow is lift (L) and that in the direction of flow is drag (D). Both are expressed non-dimensionally as:

    CL = L / (½ρV²A)    CD = D / (½ρV²A)

where ρ is the density of the fluid, V is the speed of the local flow and A is the blade area.
Each coefficient is a function of the angle of attack and Reynolds' number. As the angle of attack increases lift rises rapidly from the no lift angle before slowing its increase and then decreasing, with a sharp drop as the stall angle is reached and flow is disrupted. Drag rises slowly at first and as the rate of increase in lift falls and the angle of attack increases drag increases more sharply.
For a given strength of circulation (Γ), the lift per unit span is L = ρVΓ. The effect of the flow over and the circulation around the aerofoil is to reduce the velocity over the face and increase it over the back of the blade. If the reduction in pressure is too much in relation to the ambient pressure of the fluid, cavitation occurs: bubbles form in the low pressure area and are moved towards the blade's trailing edge, where they collapse as the pressure increases; this reduces propeller efficiency and increases noise. The forces generated by the bubble collapse can cause permanent damage to the surfaces of the blade.
where J is the advance coefficient (Va/(nD)) and p is the pitch ratio (P/D).
The forces of lift and drag on a blade element of area dA, where the force normal to the surface is dL, are:

    dL = ½ρV²·CL·dA    dD = ½ρV²·CD·dA

These forces contribute to thrust, T, on the blade:

    dT = dL cos φ − dD sin φ

where φ is the angle between the local flow and the plane of rotation. From this, the total thrust can be obtained by integrating this expression along the blade. The transverse force is found in a similar manner:

    dF = dL sin φ + dD cos φ

Substituting for dL and dD and multiplying by r gives the torque as:

    dQ = r(dL sin φ + dD cos φ) = ½ρV²(CL sin φ + CD cos φ) r dA

which can be integrated as before.
The total thrust power of the propeller is proportional to T·Va and the shaft power to 2πn·Q, so the efficiency is η = T·Va/(2πn·Q). The blade efficiency is given by the ratio between thrust and torque:

    blade efficiency = tan φ / tan(φ + γ),  where tan γ = dD/dL = CD/CL

showing that the blade efficiency is determined by its momentum and its qualities in the form of the angles φ and γ, where γ is the angle whose tangent is the ratio of the drag and lift coefficients.
This analysis is simplified and ignores a number of significant factors including interference between the blades and the influence of tip vortices.
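A worked numerical example of the efficiency expression above is sketched in C below. The thrust, advance speed, shaft speed and torque are invented figures chosen only to give plausible magnitudes.

    #include <stdio.h>

    /* Propulsive efficiency of a propeller: useful (thrust) power T*Va divided
     * by shaft power 2*pi*n*Q. All input figures are invented. */
    int main(void) {
        const double pi = 3.14159265358979;
        double T = 250000.0;   /* thrust, N */
        double Va = 7.5;       /* speed of advance, m/s */
        double n = 2.0;        /* shaft speed, revolutions per second */
        double Q = 200000.0;   /* shaft torque, N*m */

        double thrust_power = T * Va;            /* W */
        double shaft_power = 2.0 * pi * n * Q;   /* W */
        printf("thrust power: %.0f kW\n", thrust_power / 1000.0);
        printf("shaft power:  %.0f kW\n", shaft_power / 1000.0);
        printf("efficiency:   %.2f\n", thrust_power / shaft_power);
        return 0;
    }

With these numbers the efficiency comes out at about 0.75, a typical order of magnitude for a well-matched marine propeller.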
When thrust is expressed non-dimensionally, one term is a function of the advance coefficient, one is a function of the Reynolds number, and one is a function of the Froude number. The Reynolds- and Froude-number terms are likely to be small in comparison with the advance-coefficient term under normal operating conditions, so the expression can be reduced to a function of the advance coefficient alone.
For two identical propellers the expression for both will be the same. So, with propellers 1 and 2, and using the same subscripts to indicate each propeller:
For both Froude number and advance coefficient:
where λ is the ratio of the linear dimensions.
Thrust and velocity, at the same Froude number, give thrust power:
The overall propulsive efficiency (an extension of effective power (PE)) is developed from the propulsive coefficient (PC), which is derived from the installed shaft power (PS) modified by the effective power for the hull with appendages (PE), the propeller's thrust power (PT), and the relative rotative efficiency.
PE / PT = hull efficiency = ηH
PT / PO = propeller efficiency = ηO
PO / PD = relative rotative efficiency = ηR
PD / PS = shaft transmission efficiency
Producing the following:

    PC = (ηH · ηO · ηR) · (PD/PS)

The terms contained within the brackets are commonly grouped as the quasi-propulsive coefficient (QPC, ηD). The QPC is produced from small-scale experiments and is modified with a load factor for full size ships.
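The chain of efficiencies above simply multiplies together, which the sketch below illustrates with made-up values for each factor; the decomposition into hull, open-water propeller, relative rotative and shaft efficiencies is the standard naval-architecture convention assumed in the reconstruction above.

    #include <stdio.h>

    /* The quasi-propulsive coefficient is the product of hull, open-water
     * propeller and relative rotative efficiencies; multiplying by the shaft
     * transmission efficiency gives the overall propulsive coefficient.
     * All values below are invented for illustration. */
    int main(void) {
        double eta_hull = 1.05;    /* hull efficiency (can exceed 1) */
        double eta_open = 0.60;    /* open-water propeller efficiency */
        double eta_rr = 1.01;      /* relative rotative efficiency */
        double eta_shaft = 0.98;   /* shaft transmission efficiency */

        double qpc = eta_hull * eta_open * eta_rr;  /* quasi-propulsive coefficient */
        double pc = qpc * eta_shaft;                /* overall propulsive coefficient */

        printf("QPC: %.3f\n", qpc);
        printf("PC:  %.3f\n", pc);
        printf("shaft power needed for 1500 kW effective power: %.0f kW\n", 1500.0 / pc);
        return 0;
    }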
Wake is the interaction between the ship and the water with its own velocity relative to the ship. The wake has three parts - the velocity of the water around the hull; the boundary layer between the water dragged by the hull and the surrounding flow; and the waves created by the movement of the ship. The first two parts will reduce the velocity of water into the propeller; the third will either increase or decrease the velocity depending on whether the waves create a crest or trough at the propeller.
At present, one of the newest and best types of propeller is the controllable pitch propeller. This propeller has several advantages for ships. These advantages include: the least drag depending on the speed used, the ability to move the vessel backwards, and the ability to use the "vane" stance, which gives the least water resistance when the propeller is not in use (e.g. when the sails are used instead).
See Also: Astern propulsion. | http://www.reference.com/browse/handed+one | 13 |
50 | In computer security and programming, a buffer overflow, or buffer overrun, is an anomaly where a program, while writing data to a buffer, overruns the buffer's boundary and overwrites adjacent memory. This is a special case of violation of memory safety.
Buffer overflows can be triggered by inputs that are designed to execute code, or alter the way the program operates. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. Thus, they are the basis of many software vulnerabilities and can be maliciously exploited.
Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array. Bounds checking can prevent buffer overflows.
Technical description
A buffer overflow occurs when data written to a buffer also corrupts data values in memory addresses adjacent to the destination buffer due to insufficient bounds checking. This can occur when copying data from one buffer to another without first checking that the data fits within the destination buffer.
In the following example, a program has two data items which are adjacent in memory: an 8-byte-long string buffer, A, and a two-byte big-endian integer, B.
char A[8]; unsigned short B;
Initially, A contains nothing but zero bytes, and B contains the number 1979.
Now the program attempts to store the null-terminated string "excessive" in the A buffer. "excessive" is 9 characters long and encodes to 10 bytes including the terminator, but A can take only 8 bytes. By failing to check the length of the string, the copy also overwrites the value of B:
B's value has now been inadvertently replaced by a number formed from part of the character string. In this example "e" followed by a zero byte would become 25856.
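A minimal C sketch of this layout is shown below. Whether B actually sits immediately after A is up to the compiler, so the sketch groups the two in a struct to make the adjacency concrete; overflowing the buffer is undefined behaviour, and the exact value B ends up with depends on padding and endianness (the text above assumes a big-endian layout), so treat this purely as a demonstration of the idea.

    #include <stdio.h>
    #include <string.h>

    /* Demonstration of the A/B example from the text. The strcpy call is the
     * unchecked copy; the snprintf call is a bounds-checked alternative that
     * truncates instead of overflowing. */
    struct layout {
        char A[8];          /* 8-byte string buffer */
        unsigned short B;   /* adjacent two-byte integer */
    };

    int main(void) {
        struct layout m = { "", 1979 };

        /* Unsafe: "excessive" needs 10 bytes including the terminator,
         * so the copy runs past A and corrupts B (undefined behaviour). */
        strcpy(m.A, "excessive");
        printf("after strcpy:   B = %u\n", m.B);

        /* Bounds-checked alternative: never writes past A. */
        struct layout safe = { "", 1979 };
        snprintf(safe.A, sizeof safe.A, "%s", "excessive");
        printf("after snprintf: B = %u (A = \"%s\")\n", safe.B, safe.A);
        return 0;
    }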
Writing data past the end of allocated memory can sometimes be detected by the operating system to generate a segmentation fault error that terminates the process.
The techniques to exploit a buffer overflow vulnerability vary per architecture, operating system and memory region. For example, exploitation on the heap (used for dynamically allocated memory), is very different from exploitation on the call stack.
Stack-based exploitation
A technically inclined user may exploit stack-based buffer overflows to manipulate the program to their advantage in one of several ways:
- By overwriting a local variable that is near the buffer in memory on the stack to change the behavior of the program which may benefit the attacker.
- By overwriting the return address in a stack frame. Once the function returns, execution will resume at the return address as specified by the attacker, usually a user input filled buffer.
- By overwriting a function pointer, or exception handler, which is subsequently executed.
With a method called "trampolining", if the address of the user-supplied data is unknown, but the location is stored in a register, then the return address can be overwritten with the address of an opcode which will cause execution to jump to the user supplied data. If the location is stored in a register R, then a jump to the location containing the opcode for a jump R, call R or similar instruction, will cause execution of user supplied data. The locations of suitable opcodes, or bytes in memory, can be found in DLLs or the executable itself. However the address of the opcode typically cannot contain any null characters and the locations of these opcodes can vary between applications and versions of the operating system. The Metasploit Project is one such database of suitable opcodes, though only those found in the Windows operating system are listed.
Stack-based buffer overflows are not to be confused with stack overflows.
Heap-based exploitation
A buffer overflow occurring in the heap data area is referred to as a heap overflow and is exploitable in a manner different from that of stack-based overflows. Memory on the heap is dynamically allocated by the application at run-time and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc meta data) and uses the resulting pointer exchange to overwrite a program function pointer.
Barriers to exploitation
Manipulation of the buffer, which occurs before it is read or executed, may lead to the failure of an exploitation attempt. These manipulations can mitigate the threat of exploitation, but may not make it impossible. Manipulations could include conversion to upper or lower case, removal of metacharacters and filtering out of non-alphanumeric strings. However, techniques exist to bypass these filters and manipulations: alphanumeric code, polymorphic code, self-modifying code and return-to-libc attacks. The same methods can be used to avoid detection by intrusion detection systems. In some cases, including where code is converted into unicode, the threat of the vulnerability has been misrepresented by the disclosers as only Denial of Service when in fact the remote execution of arbitrary code is possible.
Practicalities of exploitation
In real-world exploits there are a variety of challenges which need to be overcome for exploits to operate reliably. These factors include null bytes in addresses, variability in the location of shellcode, differences between environments and various counter-measures in operation.
NOP sled technique
A NOP-sled is the oldest and most widely known technique for successfully exploiting a stack buffer overflow. It solves the problem of finding the exact address of the buffer by effectively increasing the size of the target area. To do this, much larger sections of the stack are corrupted with the no-op machine instruction. At the end of the attacker-supplied data, after the no-op instructions, the attacker places an instruction to perform a relative jump to the top of the buffer where the shellcode is located. This collection of no-ops is referred to as the "NOP-sled" because if the return address is overwritten with any address within the no-op region of the buffer it will "slide" down the no-ops until it is redirected to the actual malicious code by the jump at the end. This technique requires the attacker to guess where on the stack the NOP-sled is instead of the comparatively small shellcode.
Because of the popularity of this technique, many vendors of intrusion prevention systems will search for this pattern of no-op machine instructions in an attempt to detect shellcode in use. It is important to note that a NOP-sled does not necessarily contain only traditional no-op machine instructions; any instruction that does not corrupt the machine state to a point where the shellcode will not run can be used in place of the hardware assisted no-op. As a result it has become common practice for exploit writers to compose the no-op sled with randomly chosen instructions which will have no real effect on the shellcode execution.
While this method greatly improves the chances that an attack will be successful, it is not without problems. Exploits using this technique still must rely on some amount of luck that they will guess offsets on the stack that are within the NOP-sled region. An incorrect guess will usually result in the target program crashing and could alert the system administrator to the attacker's activities. Another problem is that the NOP-sled requires a much larger amount of memory in which to hold a NOP-sled large enough to be of any use. This can be a problem when the allocated size of the affected buffer is too small and the current depth of the stack is shallow (i.e. there is not much space from the end of the current stack frame to the start of the stack). Despite its problems, the NOP-sled is often the only method that will work for a given platform, environment, or situation; as such it is still an important technique.
The jump to address stored in a register technique
The "jump to register" technique allows for reliable exploitation of stack buffer overflows without the need for extra room for a NOP-sled and without having to guess stack offsets. The strategy is to overwrite the return pointer with something that will cause the program to jump to a known pointer stored within a register which points to the controlled buffer and thus the shellcode. For example, if register A contains a pointer to the start of a buffer then any jump or call taking that register as an operand can be used to gain control of the flow of execution.
In practice a program may not intentionally contain instructions to jump to a particular register. The traditional solution is to find an unintentional instance of a suitable opcode at a fixed location somewhere within the program memory. An example of such an unintentional instance is the i386 jmp esp instruction, whose opcode is FF E4. This two-byte sequence can be found at a one-byte offset from the start of the instruction call DbgPrint at address 0x7C941EED. If an attacker overwrites the program return address with this address, the program will first jump to 0x7C941EED, interpret the opcode FF E4 as the jmp esp instruction, and will then jump to the top of the stack and execute the attacker's code.
When this technique is possible the severity of the vulnerability increases considerably. This is because exploitation will work reliably enough to automate an attack with a virtual guarantee of success when it is run. For this reason, this is the technique most commonly used in Internet worms that exploit stack buffer overflow vulnerabilities.
This method also allows shellcode to be placed after the overwritten return address on the Windows platform. Since executables are mostly based at address 0x00400000 and x86 is a Little Endian architecture, the last byte of the return address must be a null, which terminates the buffer copy and nothing is written beyond that. This limits the size of the shellcode to the size of the buffer, which may be overly restrictive. DLLs are located in high memory (above 0x01000000) and so have addresses containing no null bytes, so this method can remove null bytes (or other disallowed characters) from the overwritten return address. Used in this way, the method is often referred to as "DLL Trampolining".
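The kind of opcode search described above — locating a usable FF E4 byte pair inside an existing module — is essentially a byte-pattern scan. The sketch below shows the idea in C over an arbitrary file, as a researcher auditing which modules expose such trampolines might do; it prints file offsets only, and mapping those to in-memory addresses (module base plus section layout) is deliberately left out.

    #include <stdio.h>

    /* Scan a file for the two-byte sequence FF E4 (the i386 "jmp esp" opcode).
     * Prints candidate file offsets; translating these to virtual addresses
     * would require parsing the executable's headers. */
    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        long offset = 0;
        int prev = EOF, c;
        while ((c = fgetc(f)) != EOF) {
            if (prev == 0xFF && c == 0xE4)
                printf("jmp esp candidate at file offset 0x%lx\n", offset - 1);
            prev = c;
            offset++;
        }
        fclose(f);
        return 0;
    }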
Protective countermeasures
Various techniques have been used to detect or prevent buffer overflows, with various tradeoffs. The most reliable way to avoid or prevent buffer overflows is to use automatic protection at the language level. This sort of protection, however, cannot be applied to legacy code, and often technical, business, or cultural constraints call for a vulnerable language. The following sections describe the choices and implementations available.
Choice of programming language
The choice of programming language can have a profound effect on the occurrence of buffer overflows. As of 2008, among the most popular languages are C and its derivative, C++, with a vast body of software having been written in these languages. C and C++ provide no built-in protection against accessing or overwriting data in any part of memory; more specifically, they do not check that data written to a buffer is within the boundaries of that buffer. However, the standard C++ libraries provide many ways of safely buffering data, and techniques to avoid buffer overflows also exist for C.
Many other programming languages provide runtime checking and in some cases even compile-time checking which might send a warning or raise an exception when C or C++ would overwrite data and continue to execute further instructions until erroneous results are obtained which might or might not cause the program to crash. Examples of such languages include Ada, Eiffel, Lisp, Modula-2, Smalltalk, OCaml and such C-derivatives as Cyclone and D. The Java and .NET Framework bytecode environments also require bounds checking on all arrays. Nearly every interpreted language will protect against buffer overflows, signalling a well-defined error condition. Often where a language provides enough type information to do bounds checking an option is provided to enable or disable it. Static code analysis can remove many dynamic bound and type checks, but poor implementations and awkward cases can significantly decrease performance. Software engineers must carefully consider the tradeoffs of safety versus performance costs when deciding which language and compiler setting to use.
Use of safe libraries
The problem of buffer overflows is common in the C and C++ languages because they expose low level representational details of buffers as containers for data types. Buffer overflows must thus be avoided by maintaining a high degree of correctness in code which performs buffer management. It has also long been recommended to avoid standard library functions which are not bounds checked, such as strcpy. The Morris worm exploited a gets call in fingerd.
Well-written and tested abstract data type libraries which centralize and automatically perform buffer management, including bounds checking, can reduce the occurrence and impact of buffer overflows. The two main building-block data types in these languages in which buffer overflows commonly occur are strings and arrays; thus, libraries preventing buffer overflows in these data types can provide the vast majority of the necessary coverage. Still, failure to use these safe libraries correctly can result in buffer overflows and other vulnerabilities; and naturally, any bug in the library itself is a potential vulnerability. "Safe" library implementations include "The Better String Library", Vstr and Erwin. The OpenBSD operating system's C library provides the strlcpy and strlcat functions, but these are more limited than full safe library implementations.
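As a sketch of the difference in calling convention, the C fragment below contrasts an unchecked strcpy with the bounded snprintf from the standard library and the strlcpy mentioned above. strlcpy is native on the BSDs; on Linux it typically requires libbsd, so that call is shown behind a hypothetical HAVE_STRLCPY guard.

    #include <stdio.h>
    #include <string.h>
    #ifdef HAVE_STRLCPY
    #include <bsd/string.h>   /* on Linux, strlcpy is provided by libbsd */
    #endif

    int main(void) {
        const char *input = "a string that is longer than the destination buffer";
        char dst[16];

        /* Unsafe: no length check, overflows dst for long inputs. */
        /* strcpy(dst, input); */

        /* Bounded copy with standard C: always NUL-terminates, truncates. */
        snprintf(dst, sizeof dst, "%s", input);
        printf("snprintf: \"%s\"\n", dst);

    #ifdef HAVE_STRLCPY
        /* strlcpy also truncates and NUL-terminates; its return value is the
         * length it tried to create, so truncation can be detected. */
        if (strlcpy(dst, input, sizeof dst) >= sizeof dst)
            fprintf(stderr, "strlcpy: input truncated\n");
        printf("strlcpy:  \"%s\"\n", dst);
    #endif
        return 0;
    }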
In September 2007, Technical Report 24731, prepared by the C standards committee, was published; it specifies a set of functions which are based on the standard C library's string and I/O functions, with additional buffer-size parameters. However, the efficacy of these functions for the purpose of reducing buffer overflows is disputable; it requires programmer intervention on a per function call basis that is equivalent to intervention that could make the analogous older standard library functions buffer overflow safe.
Buffer overflow protection
Buffer overflow protection is used to detect the most common buffer overflows by checking that the stack has not been altered when a function returns. If it has been altered, the program exits with a segmentation fault. Three such systems are Libsafe, and the StackGuard and ProPolice gcc patches.
Stronger stack protection is possible by splitting the stack in two: one for data and one for function returns. This split is present in the Forth language, though it was not a security-based design decision. Regardless, this is not a complete solution to buffer overflows, as sensitive data other than the return address may still be overwritten.
Pointer protection
Buffer overflows work by manipulating pointers (including stored addresses). PointGuard was proposed as a compiler-extension to prevent attackers from being able to reliably manipulate pointers and addresses. The approach works by having the compiler add code to automatically XOR-encode pointers before and after they are used. Because the attacker (theoretically) does not know what value will be used to encode/decode the pointer, he cannot predict what it will point to if he overwrites it with a new value. PointGuard was never released, but Microsoft implemented a similar approach beginning in Windows XP SP2 and Windows Server 2003 SP1. Rather than implement pointer protection as an automatic feature, Microsoft added an API routine that can be called at the discretion of the programmer. This allows for better performance (because it is not used all of the time), but places the burden on the programmer to know when it is necessary.
Because XOR is linear, an attacker may be able to manipulate an encoded pointer by overwriting only the lower bytes of an address. This can allow an attack to succeed if the attacker is able to attempt the exploit multiple times and/or is able to complete an attack by causing a pointer to point to one of several locations (such as any location within a NOP sled). Microsoft added a random rotation to their encoding scheme to address this weakness to partial overwrites.
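A toy version of the XOR idea, and of why a partial overwrite can still be useful to an attacker, is sketched below. The key handling, types and the one-byte overwrite are purely illustrative; the real Windows routines (EncodePointer/DecodePointer) and PointGuard differ in detail.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy pointer encoding in the PointGuard style: stored pointers are XORed
     * with a secret value, so an attacker who overwrites a stored pointer with
     * a chosen value cannot control where it decodes to -- unless they overwrite
     * only the low bytes, which a plain XOR leaves partially predictable. */
    static uintptr_t secret = (uintptr_t)0x5A3C96F0u;  /* would be random per process */

    static uintptr_t encode(void *p)      { return (uintptr_t)p ^ secret; }
    static void *decode(uintptr_t stored) { return (void *)(stored ^ secret); }

    int main(void) {
        int target = 42;
        uintptr_t stored = encode(&target);

        printf("real pointer    : %p\n", (void *)&target);
        printf("stored (encoded): 0x%llx\n", (unsigned long long)stored);
        printf("decoded         : %p\n", decode(stored));

        /* Partial overwrite: clobbering only the lowest byte of the stored value
         * changes only the lowest byte of the decoded pointer, which is why a
         * random rotation was added on top of the XOR. */
        stored = stored & ~(uintptr_t)0xFF;
        printf("after 1-byte overwrite, decodes to: %p\n", decode(stored));
        return 0;
    }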
Executable space protection
Executable space protection is an approach to buffer overflow protection which prevents execution of code on the stack or the heap. An attacker may use buffer overflows to insert arbitrary code into the memory of a program, but with executable space protection, any attempt to execute that code will cause an exception.
Some CPUs support a feature called NX ("No eXecute") or XD ("eXecute Disabled") bit, which in conjunction with software, can be used to mark pages of data (such as those containing the stack and the heap) as readable and writeable but not executable.
Executable space protection does not generally protect against return-to-libc attacks, or any other attack which does not rely on the execution of the attackers code. However, on 64-bit systems using ASLR, as described below, executable space protection makes it far more difficult to execute such attacks.
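On POSIX systems the same read/write-but-not-execute policy can be requested explicitly with mprotect, as in the sketch below; the NX/XD bit is what lets the kernel enforce such a mapping in hardware. Error handling is minimal, and the sketch assumes mmap, mprotect and sysconf(_SC_PAGESIZE) are available.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Allocate a page and keep it readable and writable but not executable,
     * mirroring what executable-space protection does for stacks and heaps.
     * Jumping into this page would then fault instead of running attacker data. */
    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        void *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Data can be written and read normally... */
        ((unsigned char *)buf)[0] = 0xC3;   /* an x86 'ret' byte, but it is only data here */

        /* ...and the mapping is explicitly kept non-executable (redundant here,
         * since it was never executable, but it shows the calling convention). */
        if (mprotect(buf, page, PROT_READ | PROT_WRITE) != 0) {
            perror("mprotect");
            return 1;
        }
        printf("page at %p is read/write but not executable\n", buf);

        munmap(buf, page);
        return 0;
    }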
Address space layout randomization
Address space layout randomization (ASLR) is a computer security feature which involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space.
Randomization of the virtual memory addresses at which functions and variables can be found can make exploitation of a buffer overflow more difficult, but not impossible. It also forces the attacker to tailor the exploitation attempt to the individual system, which foils the attempts of internet worms. A similar but less effective method is to rebase processes and libraries in the virtual address space.
Deep packet inspection
The use of deep packet inspection (DPI) can detect, at the network perimeter, very basic remote attempts to exploit buffer overflows by use of attack signatures and heuristics. These are able to block packets which have the signature of a known attack, or which contain a long series of No-Operation instructions (known as a NOP-sled), once commonly used when the location of the exploit's payload was slightly variable.
Packet scanning is not an effective method since it can only prevent known attacks and there are many ways that a 'nop-sled' can be encoded. Shellcode used by attackers can be made alphanumeric, metamorphic, or self-modifying to evade detection by heuristic packet scanners and intrusion detection systems.
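A crude version of the heuristic described above — flagging long runs of the single-byte x86 NOP (0x90) in a payload — can be sketched as follows. The function name and the threshold are illustrative; real inspection engines are far more elaborate, since, as noted, sleds can be built from many instructions other than 0x90.

    #include <stddef.h>
    #include <stdio.h>

    /* Return 1 if the buffer contains a run of at least 'threshold'
     * consecutive 0x90 (x86 NOP) bytes -- a naive NOP-sled heuristic. */
    static int has_nop_sled(const unsigned char *buf, size_t len, size_t threshold) {
        size_t run = 0;
        for (size_t i = 0; i < len; i++) {
            run = (buf[i] == 0x90) ? run + 1 : 0;
            if (run >= threshold)
                return 1;
        }
        return 0;
    }

    int main(void) {
        unsigned char benign[] = { 0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x90, 0x90, 0x21 };
        unsigned char suspicious[64];
        for (size_t i = 0; i < sizeof suspicious; i++)
            suspicious[i] = 0x90;   /* simulate a sled of NOPs */

        printf("benign:     %s\n", has_nop_sled(benign, sizeof benign, 16) ? "flagged" : "clean");
        printf("suspicious: %s\n", has_nop_sled(suspicious, sizeof suspicious, 16) ? "flagged" : "clean");
        return 0;
    }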
Buffer overflows were understood and partially publicly documented as early as 1972, when the Computer Security Technology Planning Study laid out the technique: "The code performing this function does not check the source and destination addresses properly, permitting portions of the monitor to be overlaid by the user. This can be used to inject code into the monitor that will permit the user to seize control of the machine." (Page 61) Today, the monitor would be referred to as the kernel.
The earliest documented hostile exploitation of a buffer overflow was in 1988. It was one of several exploits used by the Morris worm to propagate itself over the Internet. The program exploited was a service on Unix called finger. Later, in 1995, Thomas Lopatic independently rediscovered the buffer overflow and published his findings on the Bugtraq security mailing list. A year later, in 1996, Elias Levy (aka Aleph One) published in Phrack magazine the paper "Smashing the Stack for Fun and Profit", a step-by-step introduction to exploiting stack-based buffer overflow vulnerabilities.
Since then, at least two major internet worms have exploited buffer overflows to compromise a large number of systems. In 2001, the Code Red worm exploited a buffer overflow in Microsoft's Internet Information Services (IIS) 5.0 and in 2003 the SQL Slammer worm compromised machines running Microsoft SQL Server 2000.
In 2003, buffer overflows present in licensed Xbox games were exploited to allow unlicensed software, including homebrew games, to run on the console without the need for hardware modifications, known as modchips. The PS2 Independence Exploit also used a buffer overflow to achieve the same for the PlayStation 2. The Twilight hack accomplished the same with the Wii, using a buffer overflow in The Legend of Zelda: Twilight Princess.
References
- "CORE-2007-0219: OpenBSD's IPv6 mbufs remote kernel buffer overflow". Retrieved 2007-05-15.
- "The Metasploit Opcode Database". Retrieved 2007-05-15.
- "The Exploitant - Security info and tutorials". Retrieved 2009-11-29.
- "Microsoft Technet Security Bulletin MS04-028". Retrieved 2007-05-15.
- "Creating Arbitrary Shellcode In Unicode Expanded Strings" (PDF). Retrieved 2007-05-15.
- Vangelis (2004-12-08). Stack-based Overflow Exploit: Introduction to Classical and Advanced Overflow Technique (text). Wowhacker via Neworder.
- Balaban, Murat. Buffer Overflows Demystified (text). Enderunix.org.
- Akritidis, P.; Evangelos P. Markatos, M. Polychronakis, and Kostas D. Anagnostakis (2005). "STRIDE: Polymorphic Sled Detection through Instruction Sequence Analysis." (PDF). Proceedings of the 20th IFIP International Information Security Conference (IFIP/SEC 2005). IFIP International Information Security Conference. Retrieved 2012-03-04.
- Klein, Christian (2004-09). Buffer Overflow (PDF).
- Shah, Saumil (2006). "Writing Metasploit Plugins: from vulnerability to exploit" (PDF). Hack In The Box. Kuala Lumpur. Retrieved 2012-03-04.
- Intel 64 and IA-32 Architectures Software Developer's Manual Volume 2A: Instruction Set Reference, A-M (PDF). Intel Corporation. 2007-05. pp. 3–508.
- Alvarez, Sergio (2004-09-05). Win32 Stack BufferOverFlow Real Life Vuln-Dev Process (PDF). IT Security Consulting. Retrieved 2012-03-04.
- Ukai, Yuji; Soeder, Derek; Permeh, Ryan (2004). "Environment Dependencies in Windows Exploitation". BlackHat Japan. Japan: eEye Digital Security. Retrieved 2012-03-04.
- "The Better String Library".
- "The Vstr Homepage". Retrieved 2007-05-15.
- "The Erwin Homepage". Retrieved 2007-05-15.
- "CERT Secure Coding Initiative". Retrieved 2007-07-30.
- "Libsafe at FSF.org". Retrieved 2007-05-20.
- "StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks by Cowan et al." (PDF). Retrieved 2007-05-20.
- "ProPolice at X.ORG". Retrieved 2007-05-20.
- "Bypassing Windows Hardware-enforced Data Execution Prevention". Retrieved 2007-05-20.
- PointGuard: Protecting Pointers From Buffer Overflow Vulnerabilities
- Protecting Against Pointer Subterfuge (Kinda!)
- Defeating Compiler-Level Buffer Overflow Protection
- Protecting against Pointer Subterfuge (Redux)
- "PaX: Homepage of the PaX team". Retrieved 2007-06-03.
- "KernelTrap.Org". Retrieved 2007-06-03.
- "Openwall Linux kernel patch 2.4.34-ow1". Retrieved 2007-06-03.
- "Microsoft Technet: Data Execution Prevention".
- "BufferShield: Prevention of Buffer Overflow Exploitation for Windows". Retrieved 2007-06-03.
- "NGSec Stack Defender". Archived from the original on 2007-05-13. Retrieved 2007-06-03.
- "PaX at GRSecurity.net". Retrieved 2007-06-03.
- "Computer Security Technology Planning Study" (PDF). Retrieved 2007-11-02.
- ""A Tour of The Worm" by Donn Seeley, University of Utah". Archived from the original on 2007-05-20. Retrieved 2007-06-03.
- "Bugtraq security mailing list archive". Archived from the original on 2007-09-01. Retrieved 2007-06-03.
- ""Smashing the Stack for Fun and Profit" by Aleph One". Retrieved 2012-09-05.
- "eEye Digital Security". Retrieved 2007-06-03.
- "Microsoft Technet Security Bulletin MS02-039". Retrieved 2007-06-03.
- "Hacker breaks Xbox protection without mod-chip". Retrieved 2007-06-03.
- "Discovering and exploiting a remote buffer overflow vulnerability in an FTP server" by Raykoid666
- "Smashing the Stack for Fun and Profit" by Aleph One
- An Overview and Example of the Buffer-Overflow Exploit. pps. 16-21.
- CERT Secure Coding Standards
- CERT Secure Coding Initiative
- Secure Coding in C and C++
- SANS: inside the buffer overflow attack
- "Advances in adjacent memory overflows" by Nomenumbra
- A Comparison of Buffer Overflow Prevention Implementations and Weaknesses
- More Security Whitepapers about Buffer Overflows
- Chapter 12: Writing Exploits III from Sockets, Shellcode, Porting & Coding: Reverse Engineering Exploits and Tool Coding for Security Professionals by James C. Foster (ISBN 1-59749-005-9). Detailed explanation of how to use Metasploit to develop a buffer overflow exploit from scratch.
- Computer Security Technology Planning Study, James P. Anderson, ESD-TR-73-51, ESD/AFSC, Hanscom AFB, Bedford, MA 01731 (October 1972) [NTIS AD-758 206]
- "Buffer Overflows: Anatomy of an Exploit" by Nevermore | http://en.wikipedia.org/wiki/Buffer_overrun | 13 |
62 | In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x⁻¹, is a number which when multiplied by x yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. The reciprocal function, the function f(x) that maps x to 1/x, is one of the simplest examples of a function which is its own inverse (an involution).
The term reciprocal was in common use at least as far back as the third edition of Encyclopædia Britannica (1797) to describe two numbers whose product is 1; geometrical quantities in inverse proportion are described as reciprocall in a 1570 translation of Euclid's Elements.
In the phrase multiplicative inverse, the qualifier multiplicative is often omitted and then tacitly understood (in contrast to the additive inverse). Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen that ab ≠ ba; then "inverse" typically implies that an element is both a left and right inverse.
The notation x⁻¹ is sometimes also used for the inverse function, which usually is not equal to the multiplicative inverse. For example, 1/sin x = (sin x)⁻¹ is very different from the inverse of sin x, denoted sin⁻¹ x or arcsin x. Only for linear maps are they strongly related (see below). The terminology difference reciprocal versus inverse is not sufficient to make this distinction, since many authors prefer the opposite naming convention, probably for historical reasons (for example in French, the inverse function is preferably called application réciproque).
Examples and counterexamples
In the real numbers, zero does not have a reciprocal because no real number multiplied by 0 produces 1 (the product of any number with zero is zero). With the exception of zero, reciprocals of every real number are real, reciprocals of every rational number are rational, and reciprocals of every complex number are complex. The property that every element other than zero has a multiplicative inverse is part of the definition of a field, of which these are all examples. On the other hand, no integer other than 1 and -1 has an integer reciprocal, and so the integers are not a field.
In modular arithmetic, the modular multiplicative inverse of a is also defined: it is the number x such that ax ≡ 1 (mod n). This multiplicative inverse exists if and only if a and n are coprime. For example, the inverse of 3 modulo 11 is 4 because 4 · 3 ≡ 1 (mod 11). The extended Euclidean algorithm may be used to compute it.
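A compact C version of the extended Euclidean computation is sketched below; the function name, integer types and lack of input validation are choices made for this sketch only. Running it with a = 3 and n = 11 reproduces the inverse 4 from the example above.

    #include <stdio.h>

    /* Modular multiplicative inverse of a modulo n via the extended Euclidean
     * algorithm. Returns x in [0, n) with a*x = 1 (mod n), or -1 if gcd(a,n) != 1. */
    long long mod_inverse(long long a, long long n) {
        long long r0 = n, r1 = a % n;
        long long t0 = 0, t1 = 1;          /* running coefficients of a in Bezout's identity */
        while (r1 != 0) {
            long long q = r0 / r1;
            long long tmp;
            tmp = r0 - q * r1; r0 = r1; r1 = tmp;
            tmp = t0 - q * t1; t0 = t1; t1 = tmp;
        }
        if (r0 != 1)
            return -1;                     /* a and n are not coprime: no inverse exists */
        return ((t0 % n) + n) % n;         /* normalise into [0, n) */
    }

    int main(void) {
        printf("inverse of 3 mod 11: %lld\n", mod_inverse(3, 11));   /* prints 4 */
        printf("inverse of 4 mod 8:  %lld\n", mod_inverse(4, 8));    /* prints -1: not coprime */
        return 0;
    }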
The sedenions are an algebra in which every nonzero element has a multiplicative inverse, but which nonetheless has divisors of zero, i.e. nonzero elements x, y such that xy = 0.
A square matrix has an inverse if and only if its determinant has an inverse in the coefficient ring. The linear map that has the matrix A−1 with respect to some base is then the reciprocal function of the map having A as matrix in the same base. Thus, the two distinct notions of the inverse of a function are strongly related in this case, while they must be carefully distinguished in the general case (as noted above).
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
Complex numbers
As mentioned above, the reciprocal of every nonzero complex number z = a + bi is complex. It can be found by multiplying both top and bottom of 1/z by its complex conjugate z̄ = a − bi and using the property that z·z̄ = ||z||², the absolute value of z squared, which is the real number a² + b²:

    1/z = z̄/(z·z̄) = z̄/||z||² = (a − bi)/(a² + b²)
In particular, if ||z|| = 1 (z has unit magnitude), then 1/z = z̄. Consequently, the imaginary units, ±i, have additive inverse equal to multiplicative inverse, and are the only complex numbers with this property. For example, additive and multiplicative inverses of i are −(i) = −i and 1/i = −i, respectively.
For a complex number in polar form z = r(cos φ + i sin φ), the reciprocal simply takes the reciprocal of the magnitude and the negative of the angle:

    1/z = (1/r)(cos(−φ) + i sin(−φ))
The power rule for integrals (Cavalieri's quadrature formula) cannot be used to compute the integral of 1/x, as it is an exceptional case. Instead the integral is given by:

    ∫ (1/x) dx = ln |x| + C
Computing the reciprocal is important in many division algorithms, since the quotient a/b can be computed by first computing 1/b and then multiplying it by a. Noting that f(x) = 1/x − b has a zero at x = 1/b, Newton's method can find that zero, starting with a guess x0 and iterating using the rule:

    x_(n+1) = x_n(2 − b·x_n)
This continues until the desired precision is reached. For example, suppose we wish to compute 1/17 ≈ 0.0588 with 3 digits of precision. Taking x0 = 0.1, the following sequence is produced:
- x1 = 0.1(2 - 17 × 0.1) = 0.03
- x2 = 0.03(2 - 17 × 0.03) = 0.0447
- x3 = 0.0447(2 - 17 × 0.0447) ≈ 0.0554
- x4 = 0.0554(2 - 17 × 0.0554) ≈ 0.0586
- x5 = 0.0586(2 - 17 × 0.0586) ≈ 0.0588
A typical initial guess can be found by rounding b to a nearby power of 2, then using bit shifts to compute its reciprocal.
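The iteration and the power-of-two starting guess can be put together in a short C sketch; the loop below reproduces the 1/17 computation, though with a starting guess of 1/16 from the shift-based estimate rather than the 0.1 used in the worked example.

    #include <stdio.h>

    /* Newton's iteration for the reciprocal of b: x <- x * (2 - b * x).
     * The initial guess is 1 over the largest power of two not exceeding b,
     * found here with bit shifts. */
    double reciprocal(double b, int iterations) {
        unsigned long long p = 1;          /* crude power-of-two initial guess */
        while ((double)(p << 1) <= b)
            p <<= 1;
        double x = 1.0 / (double)p;

        for (int i = 0; i < iterations; i++)
            x = x * (2.0 - b * x);         /* each step roughly doubles the correct digits */
        return x;
    }

    int main(void) {
        printf("1/17 ~ %.10f (exact %.10f)\n", reciprocal(17.0, 5), 1.0 / 17.0);
        printf("1/3  ~ %.10f (exact %.10f)\n", reciprocal(3.0, 5), 1.0 / 3.0);
        return 0;
    }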
In constructive mathematics, for a real number x to have a reciprocal, it is not sufficient that x ≠ 0. There must instead be given a rational number r such that 0 < r < |x|. In terms of the approximation algorithm described above, this is needed to prove that the change in y will eventually become arbitrarily small.
This iteration can also be generalised to a wider sort of inverses, e.g. matrix inverses.
Reciprocals of irrational numbers
Every number excluding zero has a reciprocal, and reciprocals of certain irrational numbers often can prove useful for reasons linked to the irrational number in question. Examples of this are the reciprocal of e (≈ 0.367879), which is special because no other positive number can produce a lower number when put to the power of itself; and the golden ratio's reciprocal (≈ 0.618034), which is exactly one less than the golden ratio; the golden ratio is the only positive number with this property.
There are an infinite number of irrational reciprocal pairs that differ by an integer (giving the curious effect that the pairs share their infinite mantissa). These pairs can be found by simplifying n + √(n² + 1) for any integer n, and taking the reciprocal. For example, n = 2 produces 2 + √5, whose reciprocal 1/(2 + √5) is −2 + √5, exactly 4 less.
Further remarks
If the multiplication is associative, an element x with a multiplicative inverse cannot be a zero divisor (meaning for some y, xy = 0 with neither x nor y equal to zero). To see this, it is sufficient to multiply the equation xy = 0 by the inverse of x (on the left), and then simplify using associativity. In the absence of associativity, the sedenions provide a counterexample.
The converse does not hold: an element which is not a zero divisor is not guaranteed to have a multiplicative inverse. Within Z, all integers except −1, 0, 1 provide examples; they are not zero divisors nor do they have inverses in Z. If the ring or algebra is finite, however, then all elements a which are not zero divisors do have a (left and right) inverse. For, first observe that the map ƒ(x) = ax must be injective: ƒ(x) = ƒ(y) implies x = y, since

    ax = ay  ⇒  a(x − y) = 0  ⇒  x = y

because a is not a zero divisor.
Distinct elements map to distinct elements, so the image consists of the same finite number of elements, and the map is necessarily surjective. Specifically, ƒ (namely multiplication by a) must map some element x to 1, ax = 1, so that x is an inverse for a.
The expansion of the reciprocal 1/q in any base can also act as a source of pseudo-random numbers, if q is a "suitable" safe prime, a prime of the form 2p + 1 where p is also a prime. A sequence of pseudo-random numbers of length q − 1 will be produced by the expansion.
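For illustration, a minimal Python sketch that produces such an expansion by long division; q = 23 (= 2 × 11 + 1, a safe prime) is our own choice of example, and 10 happens to be a primitive root mod 23, so the expansion of 1/23 repeats with the maximal period q − 1 = 22:

```python
def reciprocal_digits(q, base=10, count=30):
    """Successive digits of the base-`base` expansion of 1/q, by long division."""
    digits, remainder = [], 1
    for _ in range(count):
        remainder *= base
        digits.append(remainder // q)
        remainder %= q
    return digits

print(reciprocal_digits(23, count=44))  # the 22-digit block 0434782608695652173913 repeats
```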
See also
- Division (mathematics)
- Fraction (mathematics)
- Group (mathematics)
- Ring (mathematics)
- Division algebra
- Exponential decay
- Unit fractions – reciprocals of integers
- Repeating decimal
- " In equall Parallelipipedons the bases are reciprokall to their altitudes". OED "Reciprocal" §3a. Sir Henry Billingsley translation of Elements XI, 34.
- Anthony, Dr. "Proof that INT(1/x)dx = lnx". Ask Dr. Math. Drexel University. Retrieved 22 March 2013.
- Mitchell, Douglas W., "A nonlinear random number generator with known, long cycle length," Cryptologia 17, January 1993, 55-62.
- Maximally Periodic Reciprocals, Matthews R.A.J. Bulletin of the Institute of Mathematics and its Applications vol 28 pp 147–148 1992 | http://en.wikipedia.org/wiki/Multiplicative_inverse | 13 |
55 | Measuring and Making Angles
On a map, you trace your route and come to a fork in the road. Two diverging roads split from a common point and form an angle. The point at which the roads diverge is the vertex. An angle separates the area around it, known in geometry as a plane, into two regions. The points inside the angle lie in the interior region of the angle, and the points outside the angle lie in the exterior region of the angle.
Once you get to know the types of angles and how to measure and create your own, you'll have picked up valuable geometry skills that will help you prove even the most complex geometric puzzles.
To do both tasks, you use a protractor, a very useful tool to keep around (see Figure 1).
When choosing a protractor, try to find one made of clear plastic. Figuring out the measure of an angle is easier because you can see the line for the angle through the protractor.
The breeds of angles
Several different angle breeds, or types, exist. You can figure out what breed of angle you have by its measure. The most common measure of an angle is in degrees. Here is a brief introduction to the four types of angles:
- Right angle. With this angle, you can never go wrong. The right angle is one of the most easily recognizable angles. It's in the form of the letter L, and it makes a square corner (see Figure 2). It has a measure of 90 degrees.
- Straight angle. You know what? It's actually a straight line. Most people don't even think of this type as an angle, but it is. A straight angle is made up of opposite rays or line segments that have a common endpoint (see Figure 3). This angle has a measure of 180 degrees.
- Right and straight angles are pretty easy to spot just by looking at them, but never jump to conclusions about the measure of an angle. Being cautious is best. If the info isn't written on the page, don't assume anything. Measure.
- Acute angle. It's the adorable angle.
- Actually, it's just a pinch. It's any angle that measures more than 0 degrees but less than 90 degrees. An acute angle falls somewhere between nonexistent and a right angle (see Figure 4).
- Obtuse angle. This type is just not as exciting as an acute angle. Its measure is somewhere between a right angle and a straight angle (see Figure 5). It is a hill you must climb, a mountain for you to summit. It has a measure of more than 90 degrees but less than 180 degrees.
Angles are most commonly measured by degrees, but for those of you who are sticklers for accuracy, even smaller units of measure can be used: minutes and seconds. These kinds of minutes and seconds are like the ones on a clock — a minute is bigger than a second. So think of a degree like an hour, and you've got it down: One degree equals 60 minutes. One minute equals 60 seconds.
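If you want to play with this on a computer, here is a tiny Python sketch of the conversion (the function name is just for illustration):

```python
def to_degrees_minutes_seconds(angle):
    """Split a decimal degree measure into whole degrees, minutes, and seconds."""
    degrees = int(angle)
    minutes_total = (angle - degrees) * 60    # one degree is 60 minutes
    minutes = int(minutes_total)
    seconds = (minutes_total - minutes) * 60  # one minute is 60 seconds
    return degrees, minutes, seconds

print(to_degrees_minutes_seconds(12.582))  # roughly (12, 34, 55.2)
```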
Before measuring an angle, spec it out and estimate which type you think it is. Is it a right angle? A straight angle? Acute or obtuse? After you estimate it, then measure the angle. Follow these steps:
1. Place the notch or center point of your protractor at the point where the sides of the angle meet (the vertex).
2. Place the protractor so that one of the lines of the angle you want to measure reads zero (that's actually 0°).
Using the zero line isn't necessary because you can measure an angle by getting the difference in the degree measures of one line to the other. It's easier, however, to measure the angle when one side of it is on the zero line. Having one line on the zero line allows you to read the measurement directly off the protractor without having to do more math. (But if you're up for the challenge, knock yourself out.)
3. Read the number off the protractor where the second side of the angle meets the protractor.
Some more advice:
- Make sure that your measure is close to your estimate. Doing so tells you whether you chose the proper scale. If you were expecting an acute angle measure but got a seriously obtuse measure, you need to rethink the scale you used. Try the other one.
- If the sides of your angle don't reach the scale of your protractor, extend them so that they do. Doing so increases the accuracy of your measure.
- Remember that the measure of an angle is always a positive number.
So what do you do if your angle doesn't quite fit on the protractor's scale? Look at Figure 6 for an example. The angle in this figure has a measure of greater than 180°. Now what? Sorry, but in this case, you're going to have to expend a little extra energy. Yes, you have to do some math. These angles are known as reflex angles and they have a measure of greater than 180°.
Draw a line so that you have a straight line (see the extended dots on Figure 6). The measure of this portion of the angle is 180° because it's a straight angle. Now measure the angle that is formed by the extension line you just made and the second side of the original angle you want to measure. (If you get confused, just look at Figure 6.) Once you have the measure of the second angle, add that number to 180. The result is the total number of degrees of the angle. In Figure 6, 180° + 45° = 225°. | http://www.dummies.com/how-to/content/measuring-and-making-angles.html | 13 |
112 | Summarizing Your Data
So now you have collected your raw data, and you have results from multiple trials of your experiment. How do you go from piles of raw data to summaries that can help you analyze your data and support your conclusions?
Fortunately, there are mathematical summaries of your data that can convey a lot of information with just a few numbers. These summaries are called descriptive statistics. The following discussion is a brief introduction to the two types of descriptive statistics that are generally most useful:
- summaries that calculate the "middle" or "average" of your data; these are called measures of central tendency, and
- summaries that indicate the "spread" of the raw measurements around the average, called measures of dispersion.
Measures of Central Tendency: Mean, Median, and Mode
In most cases, the first thing that you will want to know about a group of measurements is the "average." But what, exactly, is the "average?" Is it the mathematical average of our measurements? Is it a kind of half-way point in our data set? Is it the outcome that happened most frequently? Actually, any of these three measures could conceivably be used to convey the central tendency of the data. Most often, the mathematical average or mean of the data is used, but two other measures, the median and mode are also sometimes used.
We'll use a plant growth experiment as an example. Let's say that the experiment was to test whether plants grown in soil with compost added would grow faster than plants grown in the same soil without compost. Let's imagine that we used six separate pots for each condition, with one plant per pot. (In many cases, your project will have more than six trials. We are using fewer trials to keep the illustration simpler.) One of the growth measures chosen was the number of leaves on each plant. Suppose that the following results were obtained:
| Plant Growth Without Compost (# of leaves/plant) | Plant Growth With Compost (# of leaves/plant) |
The mean value is what we typically call the "average." You calculate the mean by adding up all of the measurements in a group and then dividing by the number of measurements. For the "without compost" case, the six plants had 3, 4, 4, 5, 6, and 8 leaves, so the mean is (3 + 4 + 4 + 5 + 6 + 8) / 6 = 30 / 6 = 5.
Median and Mode
The easiest way to find the median and the mode is to first sort each group of measurements in order, from the smallest to the largest. Here are the values sorted in order:
| Plant Growth Without Compost (# of leaves/plant) | Plant Growth With Compost (# of leaves/plant) |
The median is a value at the midpoint of the group. More explicitly, exactly half of the values in the group are smaller than the median, and the other half of the values in the group are greater than the median. If there are an odd number of measurements, the median is simply equal to the middle value of the group, when the values are arranged in ascending order. If there are an even number of measurements (as here), the median is equal to the mean of the two middle values (again, when the values are arranged in ascending order). For the "without compost" group, the median is equal to the mean of the 3rd and 4th values, which happen to be 4 and 5: (4 + 5) / 2 = 4.5.
Notice that, by definition, three of the values (3, 4, and 4) are below the median, and the other three values (5, 6, and 8) are above the median. What is the median of the "with compost" group?
The mode is the value that appears most frequently in the group of measurements. For the "without compost" group, the mode is 4, because that value is repeated twice, while all of the other values are only represented once. What is the mode of the "with compost" group?
It is entirely possible for a group of data to have no mode at all, or for it to have more than one mode. If all values occur with the same frequency (for example, if all values occur only once), then the group has no mode. If more than one value occurs at the highest frequency, then each of those values is a mode. Here is an example of a group of raw data with two modes:
The two modes of this data set are 26 and 41, since each of those values appears twice, while all the other values appear only once. A data set with two modes is sometimes called "bimodal." Multi-modal data sets are also possible.
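If you want to check these measures with a computer, Python's statistics module does the work (multimode needs Python 3.8 or newer); the bimodal list at the end is only an illustration, since the raw data table above is not reproduced here:

```python
import statistics

without_compost = [3, 4, 4, 5, 6, 8]          # leaves per plant, from the example above

print(statistics.mean(without_compost))       # 5
print(statistics.median(without_compost))     # 4.5
print(statistics.multimode(without_compost))  # [4]

# An illustrative bimodal group: both 26 and 41 occur most often
print(statistics.multimode([26, 12, 26, 41, 7, 41]))  # [26, 41]
```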
Mean, Median, or Mode: Which Measure Should I Use?
What's the difference between these measures? When would you choose to use one in preference to another? The illustration below shows the mean, median, and mode of the "without compost" data sample on a graph. The x-axis shows the number of leaves per plant. The height of each bar (y-axis) shows the number of plants that had a certain number of leaves. (Compare the graph with the data in the table, and you will see that all of the raw data values are shown in the graph.) This graph shows why the mean, median, and mode are all called measures of central tendency. The data values are spread out across the horizontal axis of the graph, but the mean, median, and mode are all clustered towards the center. Each one is a slightly different measure of what happened "on average" in the experiment. The mode (4) shows which number of leaves per plant occurred most frequently. The median (4.5) shows the value that divides the data points in half; half of the values are lower and half of the values are higher than the median. The mean (5) is the arithmetic average of all the data points.
In general, the mean is the descriptive statistic most often used to describe the central tendency of a group of measurements. Of the three measures, it is the most sensitive measurement, because its value always reflects the contributions of each of the data values in the group. The median and the mode are less sensitive to "outliers"—data values at the extremes of a group. Imagine that, for the "without compost" group, the plant with the greatest number of leaves had 11 leaves, not 8. Both the median and the mode would remain unchanged. (Check for yourself and confirm that this is true.) The mean, however, would now be 5.5 instead of 5.0.
On the other hand, sometimes it is an advantage to have a measure of central tendency that is less sensitive to changes in the extremes of the data. For example, if your data set contains a small number of outliers at one extreme, the median may be a better measure of the central tendency of the data than the mean.
If your results involve categories instead of continuous numbers, then the best measure of central tendency will probably be the most frequent outcome (the mode). For example, imagine that you conducted a survey on the most effective way to quit smoking. A reasonable measure of the central tendency of your results would be the method that works most frequently, as determined from your survey.
It is important to think about what you are trying to accomplish with descriptive statistics, not just use them blindly. If your data contains more than one mode, then summarizing them with a simple measure of central tendency such as the mean or median will obscure this fact. Table 1, below, is a quick guide to help you decide which measure of central tendency to use with your data.
|First, what are you trying to describe?||Second, what does your data look like?||Then, the best measure of central tendency is...|
|Groups, or classes of things. Survey results often fall in this category, such as, "What is the most effective way to quit smoking?" or "Gender Differences in After-School Activities"||Mode. In these made-up survey results, 'cold turkey' is the most frequent response.|
|Position on a ranking scale, such as: 1-5 stars for movies, books, or restaurants||Median. The median movie ranking in this survey was 2.3 stars.|
|Measures on a linear scale (e.g., voltage, mass, height, money, etc.)||Mean. The shape of this data is approximately the same on the left and the right side of the graph, so we call this symmetrical data. For symmetrical data, the mean is the best measurement of central tendency. In this case the mean body mass is 178 grams.|
|Median. Notice how the data in this graph is non-symmetrical. The peak of the data is not centered, and the body mass values fall off more sharply on the left of the peak than on the right. When the peak is shifted like this to one side or the other, we call it skewed data. For skewed data, the median is the best choice to measure central tendency. The median body mass for this skewed population is 185 grams.|
|Notice how this graph has two peaks. We call data with two prominent peaks bimodal data. In the case of a bimodal distribution, you may have two populations, each with its own separate central tendency. Here one group has a mean body mass of 147 grams and the other has a mean body mass of 178 grams.|
|None. Notice how this graph has three peaks and lots of overlap between the tails of the peaks. We call this multimodal data. There is no single central tendency. It is easiest to describe data like this by referring to the graph. Don't use a measure of central tendency in this case, it would be misleading.|
|None. In this case, the data is scattered all over the place. In some cases, this may indicate that you need to collect more data. In this case there is no central tendency.|
Measures of Dispersion: Range, Variance, and Standard Deviation
Measures of central tendency describe the "average" of a data set. Another important quality to measure is the "spread" of a data set. For example, these two data sets both have the same mean (5):
data set 1: 3, 4, 4, 5, 6, 8
data set 2: 1, 2, 4, 5, 7, 11.
Although both data sets have the same mean, it is obvious that the values in data set 2 are much more scattered than the values in data set 1 (see the graphs, below). For which data set would you feel more comfortable using the average description of "5"? It would be nice to have another measure to describe the "spread" of a data set. Such a measure could let us know at a glance whether the values in a data set are generally close to or far from the mean.
The descriptive statistics that measure the quality of scatter are called measures of dispersion. When added to the measures of central tendency discussed previously, measures of dispersion give a more complete picture of the data set. We will discuss three such measurements: the range, the variance, and the standard deviation.
The range of a data set is the simplest of the three measures. The range is defined by the smallest and largest data values in the set. The range of data set 1 is 3–8. What is the range of data set 2?
The range gives only minimal information about the spread of the data, by defining the two extremes. It says nothing about how the data are distributed between those two endpoints. Two other related measures of dispersion, the variance and the standard deviation, provide a numerical summary of how much the data are scattered.
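As a quick sketch in Python, here are all three measures of dispersion for the two data sets above (the population forms are used; statistics.variance and statistics.stdev give the sample versions, which divide by n − 1):

```python
import statistics

data_set_1 = [3, 4, 4, 5, 6, 8]
data_set_2 = [1, 2, 4, 5, 7, 11]

for data in (data_set_1, data_set_2):
    data_range = (min(data), max(data))
    variance = statistics.pvariance(data)
    std_dev = statistics.pstdev(data)
    print(data_range, round(variance, 2), round(std_dev, 2))
# (3, 8) 2.67 1.63   -- data set 1 is tightly clustered around 5
# (1, 11) 11.0 3.32  -- data set 2 is much more spread out
```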
For more advanced material, see Variance & Standard Deviation. | http://www.sciencebuddies.org/science-fair-projects/project_data_analysis_summarizing_data.shtml | 13 |
98 | Warning: the HTML version of this document is generated from Latex and may contain translation errors. In particular, some mathematical expressions are not translated correctly.
5.1 Return values
Some of the built-in functions we have used, such as the math functions, have produced results. Calling the function generates a new value, which we usually assign to a variable or use as part of an expression.
e = math.exp(1.0)
But so far, none of the functions we have written has returned a value.
In this chapter, we are going to write functions that return values, which we will call fruitful functions, for want of a better name. The first example is area, which returns the area of a circle with the given radius:
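The listing itself is not reproduced above; a sketch consistent with the discussion that follows (note the temporary variable temp) would be:

```python
import math

def area(radius):
    temp = math.pi * radius**2
    return temp
```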
We have seen the return statement before, but in a fruitful function the return statement includes a return value. This statement means: "Return immediately from this function and use the following expression as a return value." The expression provided can be arbitrarily complicated, so we could have written this function more concisely:
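For instance, the more concise form might read:

```python
import math

def area(radius):
    return math.pi * radius**2
```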
On the other hand, temporary variables like temp often make debugging easier.
Sometimes it is useful to have multiple return statements, one in each branch of a conditional:
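A sketch of such a function, consistent with the absoluteValue example used later in this chapter:

```python
def absoluteValue(x):
    if x < 0:
        return -x
    else:
        return x
```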
Since these return statements are in an alternative conditional, only one will be executed. As soon as one is executed, the function terminates without executing any subsequent statements.
Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code.
In a fruitful function, it is a good idea to ensure that every possible path through the program hits a return statement. For example:
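The flawed example discussed next might look like the following sketch, in which only strictly negative and strictly positive arguments reach a return statement:

```python
def absoluteValue(x):
    if x < 0:
        return -x
    elif x > 0:
        return x
```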
This program is not correct because if x happens to be 0, neither condition is true, and the function ends without hitting a return statement. In this case, the return value is a special value called None:
>>> print absoluteValue(0)
None
As an exercise, write a compare function that returns 1 if x > y, 0 if x == y, and -1 if x < y.
5.2 Program development
At this point, you should be able to look at complete functions and tell what they do. Also, if you have been doing the exercises, you have written some small functions. As you write larger functions, you might start to have more difficulty, especially with runtime and semantic errors.
To deal with increasingly complex programs, we are going to suggest a technique called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time.
As an example, suppose you want to find the distance between two points, given by the coordinates (x1, y1) and (x2, y2). By the Pythagorean theorem, the distance is:
distance = √((x2 − x1)² + (y2 − y1)²)
The first step is to consider what a distance function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)?
In this case, the two points are the inputs, which we can represent using four parameters. The return value is the distance, which is a floating-point value.
Already we can write an outline of the function:
def distance(x1, y1, x2, y2):
    return 0.0
Obviously, this version of the function doesn't compute distances; it always returns zero. But it is syntactically correct, and it will run, which means that we can test it before we make it more complicated.
To test the new function, we call it with sample values:
>>> distance(1, 2, 4, 6)
0.0
We chose these values so that the horizontal distance equals 3 and the vertical distance equals 4; that way, the result is 5 (the hypotenuse of a 3-4-5 triangle). When testing a function, it is useful to know the right answer.
At this point we have confirmed that the function is syntactically correct, and we can start adding lines of code. After each incremental change, we test the function again. If an error occurs at any point, we know where it must be: in the last line we added.
A logical first step in the computation is to find the differences x2 - x1 and y2 - y1. We will store those values in temporary variables named dx and dy and print them.
def distance(x1, y1, x2, y2):
If the function is working, the outputs should be 3 and 4. If so, we know that the function is getting the right arguments and performing the first computation correctly. If not, there are only a few lines to check.
Next we compute the sum of squares of dx and dy:
def distance(x1, y1, x2, y2):
Notice that we removed the print statements we wrote in the previous step. Code like that is called scaffolding because it is helpful for building the program but is not part of the final product.
Again, we would run the program at this stage and check the output (which should be 25).
Finally, if we have imported the math module, we can use the sqrt function to compute and return the result:
def distance(x1, y1, x2, y2):
If that works correctly, you are done. Otherwise, you might want to print the value of result before the return statement.
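Putting the incremental steps together, a sketch of the finished function (the name dsquared for the sum of squares is our choice; result is the temporary mentioned above):

```python
import math

def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    result = math.sqrt(dsquared)
    return result
```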
When you start out, you should add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger chunks. Either way, the incremental development process can save you a lot of debugging time.
The key aspects of the process are:
As an exercise, use incremental development to write a function called hypotenuse that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as arguments. Record each stage of the incremental development process as you go.
As you should expect by now, you can call one function from within another. This ability is called composition.
As an example, we'll write a function that takes two points, the center of the circle and a point on the perimeter, and computes the area of the circle.
Assume that the center point is stored in the variables xc and yc, and the perimeter point is in xp and yp. The first step is to find the radius of the circle, which is the distance between the two points. Fortunately, there is a function, distance, that does that:
radius = distance(xc, yc, xp, yp)
The second step is to find the area of a circle with that radius and return it:
result = area(radius)
Wrapping that up in a function, we get:
def area2(xc, yc, xp, yp):
We called this function area2 to distinguish it from the area function defined earlier. There can only be one function with a given name within a given module.
The temporary variables radius and result are useful for development and debugging, but once the program is working, we can make it more concise by composing the function calls:
def area2(xc, yc, xp, yp):
As an exercise, write a function slope(x1, y1, x2, y2) that returns the slope of the line through the points (x1, y1) and (x2, y2). Then use this function in a function called intercept(x1, y1, x2, y2) that returns the y-intercept of the line through the points (x1, y1) and (x2, y2).
5.4 Boolean functions
Functions can return boolean values, which is often convenient for hiding complicated tests inside functions. For example:
def isDivisible(x, y):
The name of this function is isDivisible. It is common to give boolean functions names that sound like yes/no questions. isDivisible returns either True or False to indicate whether x is or is not divisible by y.
We can make the function more concise by taking advantage of the fact that the condition of the if statement is itself a boolean expression. We can return it directly, avoiding the if statement altogether:
def isDivisible(x, y):
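The listings are not reproduced above; sketches consistent with the description, first spelled out and then in the one-line form, would be:

```python
def isDivisible(x, y):
    if x % y == 0:
        return True
    else:
        return False

# The more concise form returns the boolean expression directly
# (this second definition replaces the first):
def isDivisible(x, y):
    return x % y == 0
```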
This session shows the new function in action:
>>> isDivisible(6, 4)
False
Boolean functions are often used in conditional statements:
if isDivisible(x, y):
It might be tempting to write something like:
if isDivisible(x, y) == True:
But the extra comparison is unnecessary.
As an exercise, write a function isBetween(x, y, z) that returns True if y ≤ x ≤ z or False otherwise.
5.5 More recursion
So far, you have only learned a small subset of Python, but you might be interested to know that this subset is a complete programming language, which means that anything that can be computed can be expressed in this language. Any program ever written could be rewritten using only the language features you have learned so far (actually, you would need a few commands to control devices like the keyboard, mouse, disks, etc., but that's all).
Proving that claim is a nontrivial exercise first accomplished by Alan Turing, one of the first computer scientists (some would argue that he was a mathematician, but a lot of early computer scientists started as mathematicians). Accordingly, it is known as the Turing Thesis. If you take a course on the Theory of Computation, you will have a chance to see the proof.
To give you an idea of what you can do with the tools you have learned so far, we'll evaluate a few recursively defined mathematical functions. A recursive definition is similar to a circular definition, in the sense that the definition contains a reference to the thing being defined. A truly circular definition is not very useful:
If you saw that definition in the dictionary, you might be annoyed. On the other hand, if you looked up the definition of the mathematical function factorial, you might get something like this:
0! = 1
n! = n · (n − 1)!
This definition says that the factorial of 0 is 1, and the factorial of any other value, n, is n multiplied by the factorial of n-1.
So 3! is 3 times 2!, which is 2 times 1!, which is 1 times 0!. Putting it all together, 3! equals 3 times 2 times 1 times 1, which is 6.
If you can write a recursive definition of something, you can usually write a Python program to evaluate it. The first step is to decide what the parameters are for this function. With little effort, you should conclude that factorial has a single parameter:
If the argument happens to be 0, all we have to do is return 1:
Otherwise, and this is the interesting part, we have to make a recursive call to find the factorial of n-1 and then multiply it by n:
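A sketch of the resulting function; the temporary names recurse and result match the stack-diagram discussion below:

```python
def factorial(n):
    if n == 0:
        return 1
    else:
        recurse = factorial(n-1)
        result = n * recurse
        return result
```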
The flow of execution for this program is similar to the flow of countdown in Section 4.9. If we call factorial with the value 3:
Since 3 is not 0, we take the second branch and calculate the factorial of n-1...
Since 2 is not 0, we take the second branch and calculate the factorial of n-1...
Since 1 is not 0, we take the second branch and calculate the factorial of n-1...
Since 0 is 0, we take the first branch and return 1 without making any more recursive calls.
The return value (1) is multiplied by n, which is 1, and the result is returned.
The return value (1) is multiplied by n, which is 2, and the result, 2, is returned.
The return value (2) is multiplied by n, which is 3, and the result, 6, becomes the return value of the function call that started the whole process.
Here is what the stack diagram looks like for this sequence of function calls:
The return values are shown being passed back up the stack. In each frame, the return value is the value of result, which is the product of n and recurse.
5.6 Leap of faith
Following the flow of execution is one way to read programs, but it can quickly become labyrinthine. An alternative is what we call the "leap of faith." When you come to a function call, instead of following the flow of execution, you assume that the function works correctly and returns the appropriate value.
In fact, you are already practicing this leap of faith when you use built-in functions. When you call math.cos or math.exp, you don't examine the implementations of those functions. You just assume that they work because the people who wrote the built-in functions were good programmers.
The same is true when you call one of your own functions. For example, in Section 5.4, we wrote a function called isDivisible that determines whether one number is divisible by another. Once we have convinced ourselves that this function is correct, by examining and testing the code, we can use the function without looking at the code again.
The same is true of recursive programs. When you get to the recursive call, instead of following the flow of execution, you should assume that the recursive call works (yields the correct result) and then ask yourself, "Assuming that I can find the factorial of n-1, can I compute the factorial of n?" In this case, it is clear that you can, by multiplying by n.
5.7 One more example
In the previous example, we used temporary variables to spell out the steps and to make the code easier to debug, but we could have saved a few lines:
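The more concise form would be something like:

```python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
```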
From now on, we will tend to use the more concise form, but we recommend that you use the more explicit version while you are developing code. When you have it working, you can tighten it up if you are feeling inspired.
After factorial, the most common example of a recursively defined mathematical function is fibonacci, which has the following definition:
Translated into Python, it looks like this:
def fibonacci (n):
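The definition and listing are not reproduced above; a sketch using the common convention fibonacci(0) = 0 and fibonacci(1) = 1 (the original may choose its base cases slightly differently) is:

```python
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)
```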
If you try to follow the flow of execution here, even for fairly small values of n, your head explodes. But according to the leap of faith, if you assume that the two recursive calls work correctly, then it is clear that you get the right result by adding them together.
5.8 Checking types
What happens if we call factorial and give it 1.5 as an argument?
>>> factorial (1.5)
It looks like an infinite recursion. But how can that be? There is a base case (when n == 0), but the argument never reaches it.
In the first recursive call, the value of n is 0.5. In the next, it is -0.5. From there, it gets smaller and smaller, but it will never be 0.
We have two choices. We can try to generalize the factorial function to work with floating-point numbers, or we can make factorial check the type of its argument. The first option is called the gamma function and it's a little beyond the scope of this book. So we'll go for the second.
We can use the built-in function isinstance to verify the type of the argument. While we're at it, we also make sure the argument is positive:
def factorial (n):
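A sketch of the guarded version described here; the exact wording of the error messages is only illustrative:

```python
def factorial(n):
    if not isinstance(n, int):
        print("Factorial is only defined for integers.")
        return -1
    elif n < 0:
        print("Factorial is only defined for non-negative integers.")
        return -1
    elif n == 0:
        return 1
    else:
        return n * factorial(n-1)
```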
Now we have three base cases. The first catches nonintegers. The second catches negative integers. In both cases, the program prints an error message and returns a special value, -1, to indicate that something went wrong:
>>> factorial ("fred")
If we get past both checks, then we know that n is a non-negative integer, and we can prove that the recursion terminates.
This program demonstrates a pattern sometimes called a guardian. The first two conditionals act as guardians, protecting the code that follows from values that might cause an error. The guardians make it possible to prove the correctness of the code.
Warning: the HTML version of this document is generated from Latex and may contain translation errors. In particular, some mathematical expressions are not translated correctly. | http://www.greenteapress.com/thinkpython/thinkCSpy/html/chap05.html | 13 |
Box Algebra, Boundary Mathematics,
Logic and Laws of Form
By Lou Kauffman
I. Section One
We will work with a formal system based on one symbol, a rectangle.
An expression in this system is a finite collection of non-overlapping rectangles in the plane. For example:
In an expression, one can say about any two rectangles whether one is inside or outside of the other.
We allow two rules of transformation on expressions.
1. Cancellation. Two nested rectangles with the inner rectangle empty and the space between the two rectangles empty, can be replaced by the absence of the two rectangles. That is they can be erased.
2. Condensation. Two empty adjacent rectangles can be replaced by one of them.
Proposition. Any finite expression in rectangles can be reduced by a series of applications of cancellation and condensation to either a single rectangle or to an empty plane.
The Problem: Write a proof of this proposition. Your proof should be clearly worded and it should explain why the hypothesis of finiteness for expressions is needed.
Hints: It is useful to use a concept of depth in an expression. I have labeled the spaces in the expression below by their depth. The depth of a space is the number of inward crossings needed to reach it.
There must be rectangles enclosing spaces of maximal depth in the expression. Such rectangles are empty (else they would not be deepest). Call such a rectangle a deepest rectangle. Now you should be able to show that a deepest rectangle either has an empty rectangle next to it (so that condensation is possible), or it is surrounded by a rectangle forming a nest of two rectangles (so cancellation is possible), or the deepest rectangle is the only rectangle in the expression. These remarks will provide an algorithm for reducing any given expression to either a rectangle or to an empty plane.
Further Work: If you are interested you can carry this theory further in the following ways.
1. Allow the opposite of condensation and cancellation in expressions. That is, allow creation and expansion where expansion allows you to put a new empty rectangle next to an empty rectangle, and creation allows you to create a nest of two rectangles in any empty space.
Note that in this example we used only creation and expansion.
Also, it should be apparent by now that we only care about the relationships of the rectangles not their sizes.
Another example. We can use combinations of creation, cancellation, expansion and condensation together.
We shall call two expressions equivalent if one can be obtained from another by a finite sequence of these moves (creation, cancellation, expansion and condensation).
We have shown that any expression is equivalent to either a rectangle or to the empty plane.
Problem. Show that the rectangle and the empty plane are not equivalent to one another.
Once you have shown this, it follows that the simplification of an expression to rectangle or empty plane is unique. We will call the single rectangle the marked state and the empty plane the unmarked state. Thus every expression is equivalent to either the marked state or the unmarked state.
Hint: Find an independent way to evaluate an expression as marked or as unmarked, and then show that your method of evaluation does not change under the elementary equivalences of creation, cancellation, expansion and condensation.
2. One can then make an algebra in relation to this arithmetic just as we make algebra in relation to ordinary arithmetic. Given any expressions A and B, we let AB denote their juxtaposition in the plane. We let [A] denote the result of putting a box around the expression A (writing a box, in linear notation, as a pair of square brackets). Since each expression A or B represents either a marked or an unmarked value, the algebra that results will be an analog of Boolean algebra, or in other words an algebra of logic.
In fact we can interpret this box algebra for logic as follows:
(a) A box stands for the value TRUE. An empty space or a doubled box stands for the value FALSE.
- (b) [A] stands for NOT A.
(c) AB stands for A OR B.
We leave it as an exercise for you to see that [[A][B]] stands for A AND B and that [A]B stands for A IMPLIES B.
With this dictionary one can learn to do logic using the box algebra. It has a number of interesting features. If you want to explore this topic further take a look at the book "Laws of Form" by G. Spencer-Brown where a version of this arithmetic and algebra first appeared.
A very similar algebra for logic was invented by the late nineteenth and early twentieth century logician and philosopher Charles Sanders Peirce.
II. Section Two -- A Dialogue
This section is a dialogue between Lou, the author of this piece, and George, an hypothetical reader.
George. I have been working with your box algebra for a while now and I have some questions. My first question is this: You said that we can condense two empty boxes if they are adjacent to one another.
After a while I realized that you probably meant that
are adjacent and that we should be allowed to condense them to a single box. But I also would like to condense the two large empty boxes in the following expression. Are they adjacent?
Lou. Those two boxes are not directly next to one another. I would not call them adjacent if they were tables in a restaurant.
However it is true that we can condense them and allow the move:
So we need a technical definition of "adjacent" that will work for our
purposes. We will say that two boxes are in the same space if it is possible to draw a path from the outside of one to the outside of the other that does not cross any boundaries in the expression. This path is not necessarily a straight line. So in the situation we are discussing we see
with the connection between these two boxes showing that they are in the same space. Now we shall simply define two boxes to be adjacent if they are in the same space and let it go at that. This notion of adjacency is not quite as local as the English use of the term, but it will work for our purposes.
George. Ok. I am satisfied with that, but I notice that another way we could handle adjacency would be to only allow the boxes to occur in rows. Thus I would take an expression like
and rewrite it (without using rules expect my ability to rearrange the boxes) to the form below.
Then I can take adjacency to mean that one box is directly to the right or the left of the other, and that neither box contains the other.
Lou. That is a good point! You are observing that as far as valuation of an expression is concerned, the only relationship we need to know about two boxes is whether one box is inside the other or not. Thus we can rearrange the boxes so that they line up horizontally, and proceed to do cancellation and condensation on them using direct adjacency. Notice that with respect to this ordering from left to right, you are assuming the commutative law. That is you are assuming that AB is the same as BA when AB denotes the result of juxtaposing two expressions in direct adjacency. In particular we have
This is of course perfectly compatible with our evaluations.
George. In fact, if we linearly order the expressions from left to right, then they are the same in structure as parentheses. I mean I can write [ [[ ]] ] instead of the left expression in boxes above.
Lou. That is certainly true and it is a good way to abbreviate these boxes and a good way to input these expressions into a computer. We can also use the parentheses to translate box algebra into more traditional forms of algebra. I will come back to this point in later sections of the discussion.
George. But I would like to know how you will prove that there is no way to transform the empty box to an empty space.
Lou. Did you try to prove it?
Lou. Well, try to do it, and then we will talk about that.
III. The Dialogue Continued
George. I have been thinking about that problem, and I am confused. It seems quite obvious to me that there is no way to transform a box to an empty space, but how do you prove it?
Lou. Well first of all, it helps to see that if we had defined things differently then the catastrophe could happen! Consider a parenthesis based system with the following rule:
A. >< =     (the adjacent pair >< is equal to nothing, so it may be erased or inserted anywhere)
Then we have < >< > = < >, which is just our condensation/expansion rule. But look at this:
< > = << >>< >     (create a nest << >> next to < >)
= << > >< >     (the same string, regrouped to display an internal >< )
= << > >     (erase that >< by rule A)
= << >> =     (and the nest of two cancels, leaving nothing at all).
So in this system it is possible for the box (in parenthesis form) to disappear!
George. That was not fair! You divided the box into the left and right pieces and then let the pieces act separately!!
Lou. It is true. I did that. But it was an example of a formal system in which creation/cancellation, expansion/condensation occur and nevertheless the "box" can disappear. This example shows that we should make a proof for our box arithmetic.
George. All right. I will work on it.
Lou. Here is a hint. Think of a box expression as a signal processor. Suppose that there is a signal in each empty box, and that every time a signal "goes out" into boxes of smaller depth it is inverted when it crosses a box boundary.
Here I have illustrated this idea using two signal values. One value is n (for not marked). The other value is m (for marked). In the expression above, we give the deepest space the value n and watch the signal bubble upward. It crosses one boundary and becomes m. Then it crosses another boundary and becomes n, and finally a third boundary and becomes m in the space of depth zero. Incidentally, I will call the space of depth zero the shallowest space.
George. I see! And the final value is in fact the value of the expression consisting of the nest of three boxes. But what will you do with this one I wonder?
Here I have put unmarked signals in the deep empty spaces of the expression, and I have bubbled them up according to your rules as far as I could. But I end up with an m and an n in the same space at depth 1. What do I do now?
Lou. Well, if m means marked and n means unmarked, what would m next to n mean?
George. Oh! m next to n means marked next to unmarked but that would still be marked. I see. We can use the following rules:
mm = m
mn = nm = m
nn = n
When it is marked then it is marked. The only way to be unmarked is to be unmarked. Is this philosophy? Well look, I will continue to calculate:
So the calculation says that this expression is unmarked, and indeed if I transform it, it will go away!
Lou. Right. This is a method for calculating in a completely standard way a value of n or m for any expression. Now you can do the exercise to show that if you change the expression according to one of our rules, this value does not change. And clearly the single empty box has value m while the empty space has value n. Since m and n are distinct, this proves our result.
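For the curious, here is a small Python sketch of that standard evaluation, using the square-bracket shorthand from earlier in the dialogue; the function name and representation are our own:

```python
def value(expression):
    """Evaluate a box expression written with brackets, e.g. "[[[ ]]]" or "[ ][ ]",
    to 'm' (marked) or 'n' (unmarked)."""

    def parse(i):
        signal = 'n'                      # an empty space carries the unmarked signal
        while i < len(expression):
            c = expression[i]
            if c == '[':
                inner, i = parse(i + 1)   # signal inside the box
                crossed = 'n' if inner == 'm' else 'm'            # crossing a boundary inverts it
                signal = 'm' if 'm' in (signal, crossed) else 'n'  # mm = m, mn = nm = m, nn = n
            elif c == ']':
                return signal, i + 1
            else:
                i += 1                    # ignore spaces
        return signal, i

    return parse(0)[0]

print(value("[[[ ]]]"))           # m : the nest of three boxes is marked
print(value("[[ ]]"), value(""))  # n n : a doubled box and the empty plane are unmarked
```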
George. How do you know that m and n are distinct?
Lou. Careful! You are crossing a boundary with that question.
It is given that m and n are distinct symbols that we are calculating with. We see that any calculation with m's and n's is just a string of them like mmnmnnn. And the value of such a string is m if there is an m in the string. The value of the string is n is there is no m in the string. The use of m and n depends upon our ability to distinguish at all. If we could not make two distinct symbols to work with, then there would be no hope of doing any mathematics at all. In this sense the possibility of doing mathematics is the same as the possibility that there is a distinction.
George. Well. Where do we go from here?
Lou. Algebra and onward! But let's take a rest right now.
IV. The Dialogue Continues
George. Wait. I don't want to quit just yet! I have another question.
George. What about infinite expressions? I know that we cannot reduce them all to marked or unmarked states, but can't we study them anyway?
Lou. Certainly! Let's slow down a bit. Now the expression J that I am drawing here is perhaps the simplest example of an infinite expression. It is an infinitely descending nest of boxes.
There is no way to evaluate J using our rules, since J has no deepest space, and hence no instance of condensation or cancellation. Notice that I cannot actually write J. What I write contains those famous three dots in a deepest space. The three dots mean that you have to go on in the same pattern "forever". This expression J is a mathematical idealization. It does not exist as an entity that can be fully written out. However, J does have a very nice description in terms of itself. We can write J equal to a box around itself!
In other words, this infinite expression J is sitting inside itself.
We say that J reenters its own indicational space.
From this reentering property you can see how J has no hope to be equal to either a marked state or an unmarked state. For suppose that J is marked. Then J would be equal to a box around a box, but this is unmarked! And if J is unmarked, then the equation will tell us that J is marked.
We already knew that J could not be reduced to a marked state or to an unmarked state, but now we know that it would be inconsistent to assign it such a value. If J is to be a value in logic, then it is a value that heads on out beyond true and false. J is analogous to numbers like the square root of minus one, numbers that cannot be part of the real line. such numbers are called complex or imaginary. Similarly, we shall call J an imaginary (logical) value!
George. I see what happened to you. You fell into the thinking that mathematicians always fall into. You think that the infinite nest of marks in J exists "all at once" in some timeless domain. And then you look inside the mark around the outside of J and see that inifinity in the same pattern. And so you get your equation.
That is fine for idealists, but I am going to take a different view about what you just told me. I am going to understand that an equals sign in the form A = B really means "replace A by B". That is the way I use it when I am programming a computer. The mind is just another computer. You think things can be "equal". I deny it!
The equals sign just means that one "thing" can be replaced by another thing and in either order, unless one decides upon an order.
Now let's see what your equation says. Suppose J = [J], read as a rule; then the equation says "replace a box by a box around a box" or "replace a box around a box by a box around that." I will emphasize this by writing a double arrow instead of that equals sign.
But the equation does not say that you cannot do it again so I really should write as follows.
If you follow the process to the right, it starts building the infinite form.
Now I know it never gets there, but is this not a much better and more concrete view of the situation? The clock ticks, and at each tick a new box is added to surround the old nest of boxes.
At each tick of the clock the value of the expression changes
from marked, to unmarked, to marked , to unmarked, and so on.
Your J is just an oscillator. It is no more paradoxical than a doorbell or a buzzer. Furthermore J is either marked or unmarked at any given time! You got confused when confronted with temporality. You want to collapse the passage of time into an eternal and everlasting geometry of the present.
Lou. There is certainly merit in what you say, but I do want to treat the algebra in a way that does not depend upon time. If we take J to satisfy the equation and use the usual rules of substitution and replacement in algebra, then it is easy to arrive at a contradiction. For example, let's suppose that J satisfies all the rules that finite expressions satisfy at the algebraic level. Then since
[P]P = [ ]
for any P (writing [X], as before, for a box around X), we have [J]J = [ ].
But if J = [J] and JJ = J, then [J]J = JJ = J, so that J = [ ]; and then J = [J] = [[ ]] is also unmarked, and this is a contradiction. That is, if we attempt to include J in this standard way, then the whole system will collapse!
George. You are right, and I realize that one way to handle this situation would be to give up some of the algebraic rules that hold for finite expressions. However, there is another way due to Jim Flagg. I am sure you know it. In the Flagg Resolution we say that while J = [J], there is only one J, and so if you change J to J with a box around it somewhere, then you must make this change everywhere! With this principle in mind, you cannot make the step of replacing just one occurrence of J by [J] while leaving the others alone.
The best you can do is replace every occurrence of J by [J] at the same time,
and this will hardly lead to any contradictions!
The Flagg resolution lets us keep the basic algebra and still include temporal expressions like J. Saying that there is only one J and that substitutions must be performed everywhere is a way to include the temporality in the algebra.
Lou. I like the Flagg Resolution. We should explore the world of infinite expressions using it.
George. Thanks. Let's do that. | http://homepages.math.uic.edu/~kauffman/Arithmetic.htm | 13
64 | On arcs and angles
For the sake of comparison I shall give two presentations of (almost) the same theory. It is about a little corner of Euclidean geometry, relating the size of angles to the length of arcs of circles. When measuring the length of such arcs, we take the radius of the circle as our unit of length.
I shall first present the theory as I remember it from half a century ago.
Lemma 0 was about angles with their vertex at C (= the centre of the circle) and states that in fig. 0, ∠ACA´ = ∠BCB´ precisely when arc AA´ = arc BB´,
or “equal angles, equal arcs”. This theorem was proved via the congruence △ACA´ ≌ △BCB´, and allows us to introduce the radian as unit of angle size: the size of an angle with vertex at C equals the length of the corresponding arc. This last formulation I consider a rephrasing of Lemma 0: ∠ACA´ = arc AA´.
Lemma 1 was concerned with the size of ∠APA´ with all three points on the circle: it states ∠APA´ = (arc AA´) / 2.
The proof identifies two isosceles triangles and then establishes ∠ACA´ = 2 ∗ ∠APA´ ; with Lemma 0, Lemma 1 then follows.
The mathematics is already getting slightly unattractive, for the argument based on fig. 1 is not immediately applicable to fig. 2. This can be remedied by giving the analogous argument for fig. 2, but we are then still left with the question of whether fig. 1 and fig. 2 cover all cases.
Lemma 2 deals with the case that the vertex is an arbitrary point P inside or on the circle, as in fig. 3: it states ∠APB = (arc AB + arc A´B´) / 2.
The proof is by observing
∠APB = ∠B´BA´ + ∠BA´A
and then applying Lemma 1 twice. Here we didn’t struggle with the position of the circle’s centre, but do face the question of P lying outside the circle:
My high-school textbook solved this problem by introducing a new theorem —Lemma 3— stating that in fig. 4 ∠APB = (arc AB − arc A´B´) / 2,
and proving this theorem afresh (again by drawing the auxiliary line A´B).
The above is not too satisfactory for a variety of reasons.
- Lemmata 0 and 1 are special cases of Lemma 2.
- Lemmata 2 and 3 ask to be merged into a single lemma, e.g. by the introduction of something like a negative arc.
- There is no uniform, nonpictorial definition of which arcs have to be added or subtracted to yield which angle.
A closer inspection tells us that the notion of an angle, as used thus far, is probably too crude for our purposes.
For instance, the traditional definition of a circle is the locus of all points at given distance from a given point; Lemma 2 invited the alternative definition as the locus of points from where a given line segment is seen under the same angle or something like that, but our definition should reject fig. 5 as our circle thus defined! (The vertical line is the segment in question.)
Our notion of “angle” as used thus far is based on the notion of the angle between two rays, both starting at the vertex, as in fig. 6. It is a symmetric function of the two rays that delineate it.
We now present an alternative. We don’t know its original inventor(s) but do know that no one has promoted it more vigorously or explored it more extensively than S-C Chou, X-S Gao, & J-Z Zhang in their book Machine Proofs in Geometry .
We tentatively call it “line angle” because it associates an angle not with two rays but with two full lines, each of them extending in both directions to infinity. (This is perhaps why Chou, Gao, & Zhang speak of “full angles”.) We denote it —equally tentatively— by the infix “⌿”. It is not a symmetric function of its arguments: in fig. 7,
p⌿q = α and q⌿p = β. In words: p⌿q is the angle over which p must be rotated clockwise so as to make it parallel to q. The picture in fig. 7 strongly suggests p⌿q + q⌿p = π, but that is an equality we shall return to later.
What we have gained can be seen by observing how the notion of the line angle does away with the anomaly signalled in fig. 5. Consider two given points P and Q, and line p through P and line q through Q such that for some given α, p⌿q = α; the locus of the intersection point of p and q is a circle through P and Q:
In short, the problem is solved by distinguishing the endpoints of the line segment —e.g. by naming them differently— and subsequently distinguishing the lines of a pair by the endpoint through which they pass. The convincing elegance reflected in fig. 8 is strong evidence in favour of the notion of the line angle.
We now turn our attention to fig. 3 and fig. 4, and observe that in fig. 3, in which the arcs had to be added, the arcs from A to B and from A´ to B´ are both in the clockwise direction, whereas in fig. 4, where we had to take the difference of the arc lengths, one of the two —viz. from A´ to B´— goes counter-clockwise. This observation invites the introduction of the notion of the directed arc, tentatively denoted ⌒PQ for the directed arc from P to Q.
We still have to decide in which direction we count the arc length as positive. We decide that when the arc from P to Q goes in the clockwise direction, the value of ⌒PQ is positive.
Clocks being what they are, arcs traversed in the counter-clockwise direction are counted as negative.
Let line a intersect a circle in point A and A´; let line b intersect that same circle in points B and B´. Then
(0) a⌿b = (⌒AB + ⌒A´B´) / 2
In the above there is one issue I skimmed over. Its first manifestation is with the definition of the directed arc. The notion ⌒PQ leaves open which of the two arcs from P to Q we traverse (and how many whole revolutions we include); the possible values differ by an integer multiple of 2π, so directed arcs are defined only up to an integer multiple of 2π.
Its second manifestation is with the formulation of the theorem: if line a intersects a circle, there are two points of intersection, which is A and which A´? The answer is again “either”: the difference it makes in the sum
is, again, an integer multiple of 2π, i.e. ignorable. (Observe the following calculation, be it written down with some notational licence:
(⌒A´B + ⌒AB´) − (⌒AB + ⌒A´B´) = (⌒A´B − ⌒A´B´) + (⌒AB´ − ⌒AB) = ⌒B´B + ⌒BB´ = an integer multiple of 2π.)
Its third manifestation is with the definition of the line angle: in fig. 7 we defined p⌿q to be equal to α, the clockwise rotation from p to q. But the alternative would have been a counter-clockwise rotation over β, or, equivalently, a clockwise rotation over – β. We observe π–β = α, in general: real values that differ by an integer multiple of π represent the same line angle. With directed arcs being defined up to multiples of 2π, formula (0) now makes perfect sense, thanks to the division by 2.
Finally, a single argument to prove lemmata 1, 2 and 3. Let us move, as in fig. 10, a line parallel to itself; the observation is that, for reasons of symmetry —and here you may be as explicit as you like— the one point of intersection moves in the counter-clockwise direction exactly as far as the other point of intersection does in the clockwise direction. Hence, if we move the line pair (a,b) parallel to itself, not only a⌿b is constant, but also the sum ⌒AB + ⌒A´B´; it therefore suffices to establish (0) after each line has been moved parallel to itself so as to pass through the centre, where the equality is obvious.
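As a quick numerical sanity check of (0), one can script the conventions used above: points on the unit circle are given by their usual counter-clockwise angle θ, the directed arc is taken as ⌒PQ = (θP − θQ) mod 2π (so clockwise arcs are positive), and p⌿q = (φp − φq) mod π for line directions φ. The helper names below are our own.

```python
import math
import random

def directed_arc(theta_p, theta_q):
    # clockwise arc from P to Q, for points given by counter-clockwise angles
    return (theta_p - theta_q) % (2 * math.pi)

def line_angle(phi_p, phi_q):
    # clockwise rotation taking direction phi_p onto phi_q, defined modulo pi
    return (phi_p - phi_q) % math.pi

def chord_direction(theta1, theta2):
    # direction (mod pi) of the chord joining the circle points at theta1, theta2
    return ((theta1 + theta2) / 2 + math.pi / 2) % math.pi

random.seed(1)
for _ in range(5):
    tA, tA2, tB, tB2 = (random.uniform(0, 2 * math.pi) for _ in range(4))
    lhs = line_angle(chord_direction(tA, tA2), chord_direction(tB, tB2))
    rhs = (directed_arc(tA, tB) + directed_arc(tA2, tB2)) / 2 % math.pi
    gap = abs(lhs - rhs)
    print(round(min(gap, math.pi - gap), 12))   # 0.0: both sides name the same line angle
```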
* * *
Having seen the alternative, the reader should realize how clumsy a theory I learned when I was young: when using Lemma 2 and/or Lemma 3, one either has to show that the intersection point lies inside/outside the circle, or one has to make a case analysis.
The point I wanted to make is that in designing a mathematical theory, the choice of concepts and definitions is crucial. I thought this was a very nice and simple example with which to drive the message home. Somehow, this note got longer than I had expected.
Machine Proofs in Geometry, Shang-Ching Chou, Xiao-Shan Gao, Jing-Zhong Zhang, World Scientific Publishing Co, Singapore, 1994
Austin, 24 December 1994 | http://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1193.html | 13 |
304 | Area is a quantity that expresses the extent of a two-dimensional surface or shape, or planar lamina, in the plane. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analog of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m²), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition
An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of special kind of plane figures (termed measurable sets) to the set of real numbers which satisfies the following properties:
- For all S in M, a(S) ≥ 0.
- If S and T are in M then so are S ∪ T and S ∩ T, and also a(S∪T) = a(S) + a(T) − a(S∩T).
- If S and T are in M with S ⊆ T then T − S is in M and a(T−S) = a(T) − a(S).
- If a set S is in M and S is congruent to T then T is also in M and a(S) = a(T).
- Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk.
- Let Q be a set enclosed between two step regions S and T, i.e. S ⊆ Q ⊆ T, where a step region is formed from a finite union of adjacent rectangles resting on a common base. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c.
It can be proved that such an area function actually exists.
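As an illustration of the last property above (a sketch we add here, not part of the article), the unit disk can be sandwiched between an inner and an outer step region built on a square grid; as the grid is refined, both bounds approach π. The grid size n is an arbitrary choice.

    def step_region_bounds(n=400):
        """Areas of an inner and an outer step region for the unit disk."""
        h = 2.0 / n                      # cell side; the disk sits inside [-1, 1] x [-1, 1]
        inner = outer = 0
        for i in range(n):
            for j in range(n):
                x0, x1 = -1 + i * h, -1 + (i + 1) * h
                y0, y1 = -1 + j * h, -1 + (j + 1) * h
                # farthest corner inside the disk => the whole cell is inside
                if max(x0 * x0, x1 * x1) + max(y0 * y0, y1 * y1) <= 1.0:
                    inner += 1
                # nearest point of the cell inside the disk => the cell meets the disk
                cx = min(max(x0, 0.0), x1)
                cy = min(max(y0, 0.0), y1)
                if cx * cx + cy * cy <= 1.0:
                    outer += 1
        return inner * h * h, outer * h * h

    low, high = step_region_bounds()
    print(low, high)   # both bounds approach pi (about 3.1416) as n grows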
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m²), square centimetres (cm²), square millimetres (mm²), square kilometres (km²), square feet (ft²), square yards (yd²), square miles (mi²), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The conversion between two square units is the square of the conversion between the corresponding length units. For example, since 1 foot = 12 inches,
the relationship between square feet and square inches is
- 1 square foot = 144 square inches,
where 144 = 12² = 12 × 12. Similarly:
- 1 square kilometer = 1,000,000 square meters
- 1 square meter = 10,000 square centimetres = 1,000,000 square millimetres
- 1 square centimetre = 100 square millimetres
- 1 square yard = 9 square feet
- 1 square mile = 3,097,600 square yards = 27,878,400 square feet
- 1 square inch = 6.4516 square centimetres
- 1 square foot = 0.09290304 square metres
- 1 square yard = 0.83612736 square metres
- 1 square mile = 2.589988110336 square kilometres
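A small Python sketch of the squaring rule stated above (the helper name and the table of factors are ours, not from the article):

    LENGTH_IN_METRES = {
        "mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
        "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344,
    }

    def convert_area(value, from_unit, to_unit):
        """Convert an area by squaring the corresponding length conversion factor."""
        factor = LENGTH_IN_METRES[from_unit] / LENGTH_IN_METRES[to_unit]
        return value * factor ** 2

    print(convert_area(1, "ft", "in"))   # 144 square inches in a square foot (up to rounding)
    print(convert_area(1, "mi", "km"))   # about 2.59 square kilometres in a square mile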
Other units
- 1 are = 100 square metres
- 1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres
The acre is also commonly used to measure land areas, where
- 1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
- 1 barn = 10⁻²⁸ square metres.
- 20 Dhurki = 1 Dhur
- 20 Dhur = 1 Khatha
- 20 Khatha = 1 Bigha
- 32 Khatha = 1 Acre
Area formulae
Polygon formulae
- A = lw (rectangle)
- A = s² (square)
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection formulae
For example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of the rectangle:
- A = bh (parallelogram).
However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:
- A = ½bh (triangle).
Area of curved shapes
The formula for the area of a circle (more properly called the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is r × πr, or πr²:
- A = πr² (circle).
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral:
- A = 2 ∫_{−r}^{r} √(r² − x²) dx = πr².
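As a small numerical check we add here (not part of the article), the integral can be approximated with the midpoint rule in Python and compared against πr²:

    import math

    def disk_area(r, steps=100000):
        """Midpoint-rule approximation of 2 * integral of sqrt(r^2 - x^2) over [-r, r]."""
        h = 2.0 * r / steps
        total = 0.0
        for k in range(steps):
            x = -r + (k + 0.5) * h        # midpoint of the k-th slice
            total += 2.0 * math.sqrt(r * r - x * x) * h
        return total

    print(disk_area(1.0))        # approximately 3.14159...
    print(math.pi)               # exact value for comparison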
Surface area
Most basic formulae for surface area can be obtained by cutting surfaces and flattening them out. For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because the surface of a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:
- A = 4πr² (sphere),
where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulae
Areas of 2-dimensional figures
- A triangle: ½Bh (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: √(s(s−a)(s−b)(s−c)), where a, b, c are the sides of the triangle, and s = ½(a + b + c) is half of its perimeter. If an angle and its two included sides are given, the area is ½ab sin(C), where C is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used and is simplified to the absolute value of ½(x1y2 − x2y1 + x2y3 − x3y2 + x3y1 − x1y3). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x1,y1), (x2,y2), and (x3,y3). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use infinitesimal calculus to find the area. A short implementation sketch of the shoelace formula follows this list.
- A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: A = i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem.
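A minimal Python sketch of the shoelace formula mentioned above; the sample triangle is made up for illustration:

    def shoelace_area(vertices):
        """Area of a simple polygon whose vertices are given in order."""
        n = len(vertices)
        total = 0.0
        for k in range(n):
            x1, y1 = vertices[k]
            x2, y2 = vertices[(k + 1) % n]   # wrap around to the first vertex
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    print(shoelace_area([(0, 0), (4, 0), (0, 3)]))   # 6.0 for a 3-4-5 right triangle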
Area in calculus
- The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve: A = ∫_a^b f(x) dx.
- The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x): A = ∫_a^b (f(x) − g(x)) dx,
- where f(x) is the curve with the greater y-value.
- The area enclosed by a parametric curve (x(t), y(t)) that is traced exactly once counter-clockwise is given by the line integral ½ ∮ (x dy − y dx) (see Green's theorem) or the z-component of ½ ∮ u × du, where u = (x, y, 0); a numerical sketch follows below.
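A numerical sketch we add here of the line-integral form, applied to the ellipse x = a cos t, y = b sin t, whose exact area is πab; the step count is an arbitrary choice:

    import math

    def parametric_area(a=3.0, b=2.0, steps=100000):
        """Approximate (1/2) * integral of (x dy - y dx) around the ellipse."""
        total = 0.0
        for k in range(steps):
            t0 = 2.0 * math.pi * k / steps
            t1 = 2.0 * math.pi * (k + 1) / steps
            x0, y0 = a * math.cos(t0), b * math.sin(t0)
            x1, y1 = a * math.cos(t1), b * math.sin(t1)
            total += x0 * (y1 - y0) - y0 * (x1 - x0)   # x dy - y dx on one small segment
        return total / 2.0

    print(parametric_area())           # approximately 18.85
    print(math.pi * 3.0 * 2.0)         # exact value, 6*pi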
Surface area of 3-dimensional figures
- cone: πr(r + √(r² + h²)), where r is the radius of the circular base, and h is the height. That can also be rewritten as πr² + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr² is the base area while πrl is the lateral surface area of the cone. (A short code sketch of these formulas follows this list.)
- cube: 6s², where s is the length of an edge.
- cylinder: 2πr(r + h), where r is the radius of a base and h is the height. The 2r can also be rewritten as d, where d is the diameter.
- prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism.
- pyramid: B + (PL)/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant.
- rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height.
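A short Python sketch of the formulas above (the function names are ours):

    import math

    def cone_surface_area(r, h):
        l = math.sqrt(r * r + h * h)          # slant height
        return math.pi * r * (r + l)

    def cylinder_surface_area(r, h):
        return 2.0 * math.pi * r * (h + r)

    def rectangular_prism_surface_area(l, w, h):
        return 2.0 * (l * w + l * h + w * h)

    print(cone_surface_area(3.0, 4.0))                    # 24*pi, about 75.40
    print(cylinder_surface_area(1.0, 2.0))                # 6*pi, about 18.85
    print(rectangular_prism_surface_area(2.0, 3.0, 4.0))  # 52.0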
General formula
The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) ∈ D and D is a region in the xy-plane with a smooth boundary, is A = ∬_D √((∂f/∂x)² + (∂f/∂y)² + 1) dx dy.
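As a quick numerical check we add here, the formula applied to the plane f(x, y) = x + y over the unit square gives √3 ≈ 1.732, since both partial derivatives are 1; the grid size is an arbitrary choice:

    import math

    def graph_surface_area(fx, fy, n=200):
        """Midpoint-rule approximation of the double integral of sqrt(fx^2 + fy^2 + 1)."""
        h = 1.0 / n
        total = 0.0
        for i in range(n):
            for j in range(n):
                x = (i + 0.5) * h            # midpoint of grid cell (i, j)
                y = (j + 0.5) * h
                total += math.sqrt(fx(x, y) ** 2 + fy(x, y) ** 2 + 1.0) * h * h
        return total

    print(graph_surface_area(lambda x, y: 1.0, lambda x, y: 1.0))   # about 1.732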
List of formulae
There are formulae for many different regular and irregular polygons, and those additional to the ones above are listed here.
|Regular triangle (equilateral triangle)||(√3/4)s², where s is the length of one side of the triangle.|
|Triangle||√(s(s−a)(s−b)(s−c)) (Heron's formula), where s = ½(a + b + c) is half the perimeter, and a, b and c are the lengths of each side.|
|Triangle||½ab sin(C), where a and b are any two sides, and C is the angle between them.|
|Triangle||½bh, where b and h are the base and altitude (measured perpendicular to the base), respectively.|
|Rhombus||½d1d2, where d1 and d2 are the lengths of the two diagonals of the rhombus.|
|Parallelogram||bh, where b is the length of the base and h is the perpendicular height.|
|Trapezoid||½(a + b)h, where a and b are the parallel sides and h the distance (height) between the parallels.|
|Regular hexagon||(3√3/2)s², where s is the length of one side of the hexagon.|
|Regular octagon||2(1 + √2)s², where s is the length of one side of the octagon.|
|Regular polygon||¼ns² cot(π/n), where s is the side length and n is the number of sides.|
|Regular polygon||p²/(4n) cot(π/n), where p is the perimeter and n is the number of sides.|
|Regular polygon||½nR² sin(2π/n) = nr² tan(π/n), where R is the radius of a circumscribed circle, r is the radius of an inscribed circle, and n is the number of sides.|
|Regular polygon||½ap, where a is the apothem, or the radius of an inscribed circle in the polygon, and p is the perimeter of the polygon.|
|Circle||πr² or (πd²)/4, where r is the radius and d the diameter.|
|Circular sector||½r²θ or ½rL, where r and θ are the radius and angle (in radians), respectively, and L is the arc length.|
|Ellipse||πab, where a and b are the semi-major and semi-minor axes, respectively.|
|Total surface area of a cylinder||2πr(r + h), where r and h are the radius and height, respectively.|
|Lateral surface area of a cylinder||2πrh, where r and h are the radius and height, respectively.|
|Total surface area of a sphere||4πr² or πd², where r and d are the radius and diameter, respectively.|
|Total surface area of a pyramid||B + (PL)/2, where B is the base area, P is the base perimeter and L is the slant height.|
|Total surface area of a pyramid frustum||is the base area, is the base perimeter and is the slant height.|
|Square to circular area conversion||is the area of the square in square units.|
|Circular to square area conversion||is the area of the circle in circular units.|
The above calculations show how to find the area of many common shapes.
See also
- Equi-areal mapping
- Orders of magnitude (area)—A list of areas by size.
- Planimeter, an instrument for measuring small areas, e.g. on maps.
- Eric W. Weisstein. "Area". Wolfram MathWorld. Retrieved 3 July 2012.
- "Area Formulas". Math.com. Retrieved 2 July 2012.
- Bureau International des Poids et Mesures Resolution 12 of the 11th meeting of the CGPM (1960), retrieved 15 July 2012
- Mark de Berg; Marc van Kreveld; Mark Overmars; Otfried Schwarzkopf (2000). "Chapter 3: Polygon Triangulation". Computational Geometry (2nd revised ed.). Springer-Verlag. pp. 45–61. ISBN 3-540-65620-0
- Boyer, Carl B. (1959). A History of the Calculus and Its Conceptual Development. Dover. ISBN 0-486-60509-4.
- Eric W. Weisstein. "Surface Area". Wolfram MathWorld. Retrieved 3 July 2012.
- do Carmo, Manfredo. Differential Geometry of Curves and Surfaces. Prentice-Hall, 1976. Page 98, ISBN 978-0-13-212589-5
- Walter Rudin, Real and Complex Analysis, McGraw-Hill, 1966, ISBN 0-07-100276-6.
- Gerald Folland, Real Analysis: modern techniques and their applications, John Wiley & Sons, Inc., 1999, Page 20, ISBN 0-471-31716-0
- Moise, Edwin (1963). Elementary Geometry from an Advanced Standpoint. Addison-Wesley Pub. Co. Retrieved 15 July 2012.
- Bureau international des poids et mesures (2006). The International System of Units (SI). 8th ed. Retrieved 2008-02-13. Chapter 5.
- Braden, Bart (September 1986). "The Surveyor's Area Formula". The College Mathematics Journal 17 (4): 326–337. doi:10.2307/2686282. Retrieved 15 July 2012.
- Trainin, J. (2007-11). "An elementary proof of Pick's theorem". Mathematical Gazette 91 (522): 536–540.
- Eric W. Weisstein. "Cone". Wolfram MathWorld. Retrieved 6 July 2012.
| http://en.wikipedia.org/wiki/Area | 13
64 | A swept wing is a wing planform favored for high subsonic and supersonic jet speeds, and was first investigated in Germany from 1935 onwards until the end of the Second World War. Since the introduction of the MiG-15 and North American F-86, which demonstrated a decisive superiority over the slower first generation of straight-wing jet fighters during the Korean War, swept wings have become almost universal on all but the slowest jets (such as the A-10). Compared with the straight wings common to propeller-powered aircraft, a swept wing's root-to-wingtip direction is angled beyond (usually aftward of) the spanwise axis. This has the effect of delaying the drag rise caused by fluid compressibility near the speed of sound; swept-wing fighters such as the F-86 were among the first able to exceed the speed of sound in a slight dive, and later in level flight.
Unusual variants of this design feature are forward sweep, variable sweep wings and pivoting wings. Swept wings as a means of reducing wave drag were first used on jet fighter aircraft, although many propeller-driven aircraft now also use the wing plan.
The angle of sweep which characterizes a swept wing is conventionally measured along the 25% chord line. If the 25% chord line varies in sweep angle, the leading edge is used; if that varies, the sweep is expressed in sections (e.g., 25 degrees from 0 to 50% span, 15 degrees from 50% to wingtip). Angle of sweep equals 1/2[180 deg - (nose angle)].
Subsonic and transonic behavior
As an aircraft enters the transonic speeds just below the speed of sound, an effect known as wave drag starts to appear. Using conservation of momentum principles in the direction normal to surface curvature, airflow accelerates around curved surfaces, and near the speed of sound the acceleration can cause the airflow to reach supersonic speeds locally. When this occurs, a shock wave is generated at the point where the flow slows back down to subsonic speed. Since this occurs on curved areas, shocks are normally associated with the upper surfaces of the wing, the cockpit canopy, and the nose cone of the aircraft, the areas with the highest local curvature.
Shock waves require energy to form. This energy is taken out of the aircraft, which has to supply extra thrust to make up for this energy loss. Thus the shocks are seen as a form of drag. Since the shocks form when the local air velocity reaches supersonic speeds over various features of the aircraft, there is a certain "critical mach" speed (or drag divergence mach number) where this effect becomes noticeable. This is normally when the shocks start generating over the wing, which on most aircraft is the largest continually curved surface, and therefore the largest contributor to this effect.
One of the simplest and best explanations of how the swept wing works was offered by Robert T. Jones: "Suppose a cylindrical wing (constant chord, incidence, etc.) is placed in an airstream at an angle of yaw - ie., it is swept back. Now, even if the local speed of the air on the upper surface of the wing becomes supersonic, a shock wave cannot form there because it would have to be a sweptback shock - swept at the same angle as the wing - ie., it would be an oblique shock. Such an oblique shock cannot form until the velocity component normal to it becomes supersonic."
One limiting factor in swept wing design is the so-called "middle effect". If a swept wing is continuous - an oblique swept wing, the pressure iso-bars will be swept at a continuous angle from tip to tip. However, if the left and right halves are swept back equally, as is common practice, the pressure iso-bars on the left wing in theory will meet the pressure iso-bars of the right wing on the centerline at a large angle. As the iso-bars cannot meet in such a fashion, they will tend to curve on each side as they near the centerline, so that the iso-bars cross the centerline at right angles to the centerline. This causes an "unsweeping" of the iso-bars in the wing root region. To combat this unsweeping, German aerodynamicist Dietrich Küchemann proposed and had tested a local indentation of the fuselage above and below the wing root. This proved to not be very effective. During the development of the Douglas DC-8 airliner, uncambered airfoils were used in the wing root area to combat the unsweeping. Similarly, a decambered wing root glove was added to the Boeing 707 wing to create the Boeing 720.
Swept wings for the transonic range
Tu-95 propeller-driven bomber with swept wings, cruise speed 710 km/h
KC-10 Extender, cruise speed: 908 km/h
HFB-320 Hansa Jet with forward swept wings, cruise speed: 825 km/h
Supersonic behavior
Airflow at supersonic speeds generates lift through the formation of shock waves, as opposed to the patterns of airflow over and under the wing. These shock waves, as in the transonic case, generate large amounts of drag. One of these shock waves is created by the leading edge of the wing, but contributes little to the lift. In order to minimize the strength of this shock it needs to remain "attached" to the front of the wing, which demands a very sharp leading edge. To better shape the shocks that will contribute to lift, the rest of an ideal supersonic airfoil is roughly diamond-shaped in cross-section. For low-speed lift these same airfoils are very inefficient, leading to poor handling and very high landing speeds.
One way to avoid the need for a dedicated supersonic wing is to use a highly swept subsonic design. Airflow behind the shock waves of a moving body is reduced to subsonic speeds. This effect is used within the intakes of engines meant to operate at supersonic speeds, as jet engines are generally incapable of ingesting supersonic air directly. This can also be used to reduce the speed of the air as seen by the wing, using the shocks generated by the nose of the aircraft. As long as the wing lies behind the cone-shaped shock wave, it will "see" subsonic airflow and work as normal. The angle needed to lie behind the cone increases with increasing speed: at Mach 1.3 the angle is about 45 degrees, at Mach 2.0 it is 60 degrees. For instance, at Mach 1.3 the half-angle of the Mach cone formed off the body of the aircraft is given by sin μ = 1/M (μ is the angle of the Mach cone, measured from the direction of flight).
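The relation can be tabulated directly; a small Python sketch we add here prints the Mach angle and the sweep needed to keep the wing inside the cone:

    import math

    for mach in (1.3, 1.5, 2.0, 3.0):
        mu = math.degrees(math.asin(1.0 / mach))   # Mach angle, measured from the flight direction
        sweep = 90.0 - mu                          # sweep needed to stay behind the cone
        print(mach, round(mu, 1), round(sweep, 1))
    # Compare with the rough figures quoted in the text above.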
Generally it is not possible to arrange the wing so it will lie entirely outside the supersonic airflow and still have good subsonic performance. Some aircraft, like the English Electric Lightning or Convair F-106 Delta Dart are tuned entirely for high-speed flight and feature highly swept planforms without regard to the low-speed problems this creates. In other cases the use of variable geometry wings, as on the Grumman F-14 Tomcat, allows an aircraft to move the wing to keep it at the most efficient angle regardless of speed, although the cost in complexity and weight makes this a rare feature.
Most high-speed aircraft have a wing that spends at least some of its time in the supersonic airflow. But since the shock cone moves towards the fuselage with increased speed (that is, the cone becomes narrower), the portion of the wing in the supersonic flow also changes with speed. Since these wings are swept, as the shock cone moves inward, the lift vector moves forward as the outer, rearward portions of the wing are generating less lift. This results in powerful pitching moments and their associated required trim changes.
When a swept wing travels at high speed, the airflow has little time to react and simply flows over the wing almost straight from front to back. At lower speeds the air does have time to react, and is pushed spanwise by the angled leading edge, towards the wing tip. At the wing root, by the fuselage, this has little noticeable effect, but as one moves towards the wingtip the airflow is pushed spanwise not only by the leading edge, but the spanwise moving air beside it. At the tip the airflow is moving along the wing instead of over it, a problem known as spanwise flow.
The lift from a wing is generated by the airflow over it from front to rear. With increasing span-wise flow the boundary layers on the surface of the wing have longer to travel, and so are thicker and more susceptible to transition to turbulence or flow separation, also the effective aspect ratio of the wing is less and so air "leaks" around the wing tips reducing their effectiveness. The spanwise flow on swept wings produces airflow that moves the stagnation point on the leading edge of any individual wing segment further beneath the leading edge, increasing effective angle of attack of wing segments relative to its neighbouring forward segment. The result is that wing segments farther towards the rear operate at increasingly higher angles of attack promoting early stall of those segments. This promotes tip stall on back swept wings, as the tips are most rearward, while delaying tip stall for forward swept wings, where the tips are forward. With both forward and back swept wings, the rear of the wing will stall first. This creates a nose-up pressure on the aircraft. If this is not corrected by the pilot it causes the plane to pitch up, leading to more of the wing stalling, leading to more pitch up, and so on. This problem came to be known as the Sabre dance in reference to the number of North American F-100 Super Sabres that crashed on landing as a result.
The solution to this problem took on many forms. One was the addition of a fin known as a wing fence on the upper surface of the wing to redirect the flow to the rear (see the MiG-15 as an example.) Another closely related design was addition of a dogtooth notch to the leading edge (Avro Arrow). Other designs took a more radical approach, including the Republic XF-91 Thunderceptor's wing that grew wider towards the tip to provide more lift at the tip. The Handley Page Victor had a planform based on a crescent compound sweep or scimitar wing that had substantial sweep-back near the wing root where the wing was thickest, and progressively reducing sweep along the span as the wing thickness reduced towards the tip.
Modern solutions to the problem no longer require "custom" designs such as these. The addition of leading edge slats and large compound flaps to the wings has largely resolved the issue. On fighter designs, the addition of leading edge extensions, included for high maneuverability, also serve to add lift during landing and reduce the problem.
The swept wing also has several more problems. One is that for any given length of wing, the actual span from tip-to-tip is shorter than the same wing that is not swept. Low speed drag is strongly correlated with the aspect ratio, the span compared to chord, so a swept wing always has more drag at lower speeds. Another concern is the torque applied by the wing to the fuselage, as much of the wing's lift lies behind the point where the wing root connects to the plane. Finally, while it is fairly easy to run the main spars of the wing right through the fuselage in a straight wing design to use a single continuous piece of metal, this is not possible on the swept wing because the spars will meet at an angle.
Forward sweep
Sweeping a wing forward has approximately the same effect as rearward in terms of drag reduction, but has other advantages in terms of low-speed handling where tip stall problems simply go away. In this case the low-speed air flows towards the fuselage, which acts as a very large wing fence. Additionally, wings are generally larger at the root anyway, which allows them to have better low-speed lift.
However, this arrangement also has serious stability problems. The rearmost section of the wing will stall first causing a pitch-up moment pushing the aircraft further into stall similar to a swept back wing design. Thus swept-forward wings are unstable in a fashion similar to the low-speed problems of a conventional swept wing. However unlike swept back wings, the tips on a forward swept design will stall last, maintaining roll control.
Forward-swept wings can also experience dangerous flexing effects compared to aft-swept wings that can negate the tip stall advantage if the wing is not sufficiently stiff. In aft-swept designs, when the airplane maneuvers at high load factor the wing loading and geometry twists the wing in such a way as to create washout (tip twists leading edge down). This reduces the angle of attack at the tip, thus reducing the bending moment on the wing, as well as somewhat reducing the chance of tip stall. However, the same effect on forward-swept wings produces a wash-in effect which increases the angle of attack promoting tip stall.
Small amounts of sweep do not cause serious problems, and had been used on a variety of aircraft to move the spar into a convenient location, as on the Junkers Ju 287 or HFB-320 Hansa Jet. But larger sweep suitable for high-speed aircraft, like fighters, was generally impossible until the introduction of fly by wire systems that could react quickly enough to damp out these instabilities. The Grumman X-29 was an experimental technology demonstration project designed to test the forward swept wing for enhanced maneuverability in 1984. The Su-47 Berkut is another notable example using this technology. However no highly swept-forward design has entered production.
The first aircraft with swept wings were those designed by the British designer J.W.Dunne in the first decade of the 20th century. Dunne successfully employed severely swept wings in his tailless aircraft as a means of creating positive longitudinal static stability. Historically, many low-speed aircraft have had swept wings in order to avoid problems with their center of gravity, to move the wing spar into a more convenient location, or to improve the sideways view from the pilot's position. For instance, the Douglas DC-3 had a slight sweep to the leading edge of its wing. The wing sweep in low-speed aircraft was not intended to help with transonic performance, and although most have a small amount of wing sweep they are rarely described as swept wing aircraft. The Curtiss XP-55 was the first American swept wing airplane, although it was not considered successful. The swept wing had appeared before World War I, conceived as a means of permitting the design of safe, stable, and tailless flying wings. It imposed “self-damping” inherent stability upon the flying wing, and, as a result, many flying wing gliders and some powered aircraft appeared in the interwar years.
The idea of using swept wings to reduce high-speed drag was first developed in Germany in the 1930s. At a Volta Conference meeting in 1935 in Italy, Dr. Adolf Busemann suggested the use of swept wings for supersonic flight. He noted that the airspeed over the wing was dominated by the normal component of the airflow, not the freestream velocity, so by setting the wing at an angle the forward velocity at which the shock waves would form would be higher (the same had been noted by Max Munk in 1924, although not in the context of high-speed flight). Albert Betz immediately suggested the same effect would be equally useful in the transonic regime. After the presentation the host of the meeting, Arturo Crocco, jokingly sketched "Busemann's airplane of the future" on the back of a menu while they all dined. Crocco's sketch showed a classic 1950s fighter design, with swept wings and tail surfaces, although he also sketched a swept propeller powering it.
Hubert Ludwieg of the High-Speed Aerodynamics Branch at the AVA Göttingen in 1939 conducted the first wind tunnel tests to investigate Busemann's theory. Two wings, one with no sweep and one with 45 degrees of sweep, were tested at Mach numbers of 0.7 and 0.9 in the 11 x 13 cm wind tunnel. The results of these tests confirmed the drag reduction offered by swept wings at transonic speeds. The results of the tests were communicated to Albert Betz, who then passed them on to Willy Messerschmitt in December 1939. The tests were expanded in 1940 to include wings with 15, 30 and -45 degrees of sweep and Mach numbers as high as 1.21.
At the time, however, there was no way to power an aircraft to these sorts of speeds, and even the fastest aircraft of the era were only approaching 400 km/h (249 mph). Large engines at the front of the aircraft made it difficult to obtain a reasonable fineness ratio, and although wings could be made thin and broad, doing so made them considerably less strong. The British Supermarine Spitfire used as thin a wing as possible for lower high-speed drag, but later paid a high price for it in a number of aerodynamic problems such as control reversal. German design instead opted for thicker wings, accepting the drag for greater strength and increased internal space for landing gear, fuel and weapons.
At the time the presentation was largely of academic interest, and soon forgotten. Even notable attendees including Theodore von Kármán and Eastman Jacobs did not recall the presentation 10 years later when it was re-introduced to them. Busemann was in charge of aerodynamics research at Braunschweig, and in spite of the limited interest he began a research program studying the concept. By 1939 wind tunnel testing had demonstrated that the effect was real, and practical.
With the introduction of jets in the later half of World War II applying sweep became relevant. The German jet-powered Messerschmitt Me 262 and rocket-powered Messerschmitt Me 163 suffered from compressibility effects that made them very difficult to control at high speeds. In addition, the speeds put them into the wave drag regime, and anything that could reduce this drag would increase the performance of their aircraft, notably the notoriously short flight times measured in minutes. This resulted in a crash program to introduce new swept wing designs, both for fighters as well as bombers. The Focke-Wulf Ta 183 was a swept wing fighter design with a layout very similar to that later used on the MiG-15 that was not produced before war's end.
A prototype test aircraft, the Messerschmitt Me P.1101, was built to research the tradeoffs of the design and develop general rules about what angle of sweep to use. None of the fighter or bomber designs were ready for use by the time the war ended, but the P.1101 was captured by US forces and returned to the United States, where two additional copies with US built engines carried on the research as the Bell X-5. The last jet fighter designed by Willy Messerschmitt the HA-300 had swept wings, Delta Wing in this case.
Technology impact
The Soviet Union was intrigued by the idea of swept wings on aircraft at the end of World War II in Europe, when its "captured aviation technology" teams, counterparts to those of the western Allies, spread out across the defeated Third Reich. Artem Mikoyan was asked by the Soviet government, principally by the government's TsAGI aviation research department, to develop a test-bed aircraft to research the swept wing idea—the result was the unusual MiG-8 Utka, a pusher canard layout aircraft first flown in late 1945, with its rearwards-located wings swept back for this type of research. When applied to the jet-powered MiG-15, its maximum speed of 1,075 km/h (668 mph) outclassed the straight-winged American jets and piston-engined fighters first deployed to Korea.
von Kármán travelled to Germany near the end of the war as part of Operation Paperclip, and reached Braunschweig on May 7, discovering a number of swept wing models and a mass of technical data from the wind tunnels. One member of the US team was George S. Schairer, who was at that time working at the Boeing company. He immediately forwarded a letter to Ben Cohn at Boeing stating that they needed to investigate the concept. He also told Cohn to distribute the letter to other companies as well, although only Boeing and North American made immediate use of it.
In February 1945, NACA engineer Robert T. Jones started looking at highly swept delta wings and V shapes, and discovered the same effects as Busemann. He finished a detailed report on the concept in April, but found his work was heavily criticised by other members of NACA Langley, notably Theodore Theodorsen, who referred to it as "hocus-pocus" and demanded some "real mathematics". However, Jones had already secured some time for free-flight models under the direction of Robert Gilruth, whose reports were presented at the end of May and showed a fourfold decrease in drag at high speeds. All of this was compiled into a report published on June 21, 1945, which was sent out to the industry three weeks later. Ironically, by this point Busemann's work had already been passed around.
Boeing was in the midst of designing the Boeing B-47 Stratojet, and the initial Model 424 was a straight-wing design similar to the B-45, B-46 and B-48 it competed with. A recent design overhaul completed in June produced the Model 432, another four-engine design with the engines buried in the fuselage to reduce drag, and long-span wings that gave it an almost glider-like appearance. By September the Braunschweig data had been worked into the design, which re-emerged as the Model 448, a larger six-engine design with more robust wings swept at about 35 degrees. Another re-work in November moved the engines into strut-mounted pods under the wings since Boeing was concerned that the uncontained failure of an internal engine could potentially destroy the aircraft. With the engines mounted away from the wings on struts equipped with fuse pins, an out-of-balance engine would simply shatter the pins and fall harmlessly away, sparing the aircraft from destructive vibrations. The resulting B-47 design had performance rivaling the fastest fighters and trounced the straight-winged competition. Boeing's winning jet-transport formula of swept wings and engines mounted on pylons under the wings has since been universally adopted.
In fighters, North American Aviation was in the midst of working on a straight-wing jet-powered naval fighter then known as the FJ-1. It was submitted to the Air Force as the XP-86. Larry Green, who could read German, studied the Busemann reports and convinced management to allow a redesign starting in August 1945. A battery of wind tunnel tests followed, and although little else of the design was changed, including the wing profile (NACA 0009), the performance of the aircraft was dramatically improved over straight-winged jets. With the appearance of the MiG-15, the F-86 was rushed into combat and straight-wing jets like the Lockheed P-80 Shooting Star and Republic F-84 Thunderjet were soon relegated to ground attack. Some, such as the F-84 and Grumman F-9 Cougar, were later redesigned with swept wings from straight-winged aircraft. Later planes such as the North American F-100 Super Sabre would be designed with swept wings from the start, though additional innovations such as the afterburner, area-rule and new control surfaces would be necessary to master supersonic flight.
The British also received the German data, and decided that future high-speed designs would have to use it. A particularly interesting victim of this process was the cancellation of the Miles M-52, a straight-wing design for an attempt on the speed of sound. When the swept wing design came to light the project was cancelled, as it was thought it would have too much drag to break the sound barrier, but soon after the US nevertheless did just that with the Bell X-1. The Air Ministry introduced a program of experimental aircraft to examine the effects of swept wings (as well as delta wings) and introduced their first combat designs as the Hawker Hunter and Supermarine Swift.
The German research was also "leaked" to SAAB from a source in Switzerland in late 1945. They were in the process of developing the jet fighter Saab 29 Tunnan, and quickly adapted the existing straight-wing layout to incorporate a 25 degree sweep. Although not well known outside Sweden, the Tunnan was a very competitive design, remaining in service until 1972 in some roles.
The introduction of the German swept wing research to aeronautics caused a minor revolution, especially after the dramatic successes of the B-47 and F-86. Almost all ongoing design efforts immediately underwent modifications to incorporate a swept wing. The classic Boeing B-52, designed in the 1950s, would remain in service into the 21st century as a high subsonic long-range heavy bomber despite the development of the triple-sonic North American B-70 Valkyrie, the supersonic swing-wing Rockwell B-1 Lancer, and flying wing designs. While the Soviets never matched the performance of the Boeing B-52 Stratofortress with a jet design, the intercontinental-range Tupolev Tu-95 turboprop bomber also remains in service today. With a near-jet-class top speed of 920 km/h, it is unusual in combining swept wings with propeller propulsion and remains the fastest propeller-powered production aircraft. By the 1960s, most civilian jets such as the Boeing 707 had adopted swept wings as well.
By the early 1950s nearly every new fighter was either rebuilt or designed from scratch with a swept wing. The Douglas A-4 Skyhawk and Douglas F4D Skyray were examples of delta wings, which also have swept leading edges, with or without a tail. Most early transonic and supersonic designs such as the MiG-19 and F-100 used long, highly swept wings. Swept wings would reach Mach 2 in the arrow-winged BAC Lightning and the stubby-winged Republic F-105 Thunderchief, which was found wanting in turning ability in Vietnam combat. By the late 1960s, the F-4 Phantom and Mikoyan-Gurevich MiG-21, which both used variants on tailed delta wings, came to dominate front line air forces. Variable geometry wings were employed on the American F-111, Grumman F-14 Tomcat and Soviet Mikoyan-Gurevich MiG-27, although the idea would be abandoned for the American SST design. After the 1970s, most newer-generation fighters optimized for maneuvering air combat, starting with the USAF F-15 and Soviet Mikoyan MiG-29, have employed relatively short-span fixed wings with relatively large wing area.
Sweep theory
Sweep theory is an aeronautical engineering description of the behavior of airflow over a wing when the wing's leading edge encounters the airflow at an oblique angle. The development of sweep theory resulted in the swept wing design used by most modern jet aircraft, as this design performs more effectively at transonic and supersonic speeds. In its advanced form, sweep theory led to the experimental oblique wing concept.
Adolf Busemann introduced the concept of the swept wing, presenting it in 1935 at the fifth Volta Congress in Rome. Sweep theory in general was a subject of development and investigation throughout the 1930s and 1940s, but the breakthrough mathematical definition of sweep theory is generally credited to NACA's Robert T. Jones in 1945. Sweep theory builds on other wing lift theories. Lifting line theory describes lift generated by a straight wing (a wing in which the leading edge is perpendicular to the airflow). Weissinger theory describes the distribution of lift for a swept wing, but does not have the capability to include chordwise pressure distribution. There are other methods that do describe chordwise distributions, but they have other limitations. Jones' sweep theory provides a simple, comprehensive analysis of swept wing performance.
To visualize the basic concept of simple sweep theory, consider a straight, non-swept wing of infinite length, which meets the airflow at a perpendicular angle. The resulting air pressure distribution is spread over the length of the wing's chord (the distance from the leading edge to the trailing edge). If we were to begin to slide the wing sideways (spanwise), the sideways motion of the wing relative to the air would be added to the previously perpendicular airflow, resulting in an airflow over the wing at an angle to the leading edge. This angle results in airflow traveling a greater distance from leading edge to trailing edge, and thus the air pressure is distributed over a greater distance (and consequently lessened at any particular point on the surface).
This scenario is identical to the airflow experienced by a swept wing as it travels through the air. The airflow over a swept wing encounters the wing at an angle. That angle can be broken down into two vectors, one perpendicular to the wing, and one parallel to the wing. The flow parallel to the wing has no effect on it, and since the perpendicular vector is shorter (meaning slower) than the actual airflow, it consequently exerts less pressure on the wing. In other words, the wing experiences airflow that is slower - and at lower pressures - than the actual speed of the aircraft.
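A small sketch we add here of this decomposition: only the component of the flow normal to the leading edge matters, so the wing effectively sees Mach number M·cos(sweep):

    import math

    def effective_mach(mach, sweep_deg):
        """Component of the flight Mach number normal to the leading edge."""
        return mach * math.cos(math.radians(sweep_deg))

    print(effective_mach(0.85, 35.0))   # about 0.70 for a 35-degree sweep at cruise Mach 0.85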
One of the factors that must be taken into account when designing a high-speed wing is compressibility, which is the effect that acts upon a wing as it approaches and passes through the speed of sound. The significant negative effects of compressibility made it a prime issue with aeronautical engineers. Sweep theory helps mitigate the effects of compressibility in transonic and supersonic aircraft because of the reduced pressures. This allows the mach number of an aircraft to be higher than that actually experienced by the wing.
There is also a negative aspect to sweep theory. The lift produced by a wing is directly related to the speed of the air over the wing. Since the airflow speed experienced by a swept wing is lower than what the actual aircraft speed is, this becomes a problem during slow-flight phases, such as takeoff and landing. There have been various ways of addressing the problem, including the variable-incidence wing design on the F-8 Crusader and swing wings on aircraft such as the F-14, F-111, and the Panavia Tornado.
- Sears, William Rees, "Stories from a 20th-Century Life", Parabolic Press, Inc., Stanford, California, 1994.
- Meier, Hans-Ulrich, editor German Development of the Swept Wing 1935-1945, AIAA Library of Flight, 2010. Originally published in German as Die deutsche Luftahrt Die Pfeilflügelentwicklung in Deutschland bis 1945, Bernard & Graefe Verlag, 2006.
- Shevell, Richard, "Aerodynamic Design Features", DC-8 design summary, February 22, 1957.
- Dunn, Orville R., "Flight Characteristics of the DC-8", SAE paper 237A, presented at the SAE National Aeronautic Meeting, Los Angeles California, October 1960.
- Cook, William H. The Road to the 707: The Inside Story of Designing the 707. Bellevue, Washington: TYC Publishing, 1991. ISBN 0-9629605.
- "Supersonic Wing Designs." selkirk.bc.ca. Retrieved: August 1, 2011.
- "Supersonic Wing design: The Mach cone becomes increasingly swept back with increasing Mach numbers." Centennial of Flight Commission, 2003. Retrieved: August 1, 2011.
- Haack, Wolfgang. "Heinzerling, Supersonic Area Rule" (in German), p. 39. bwl.tu-darmstadt.de.
- "Forward swept wings." Homebuiltairplanes. Retrieved: August 1, 2011.
- Poulsen, C. M. "Tailless Trials." Flight, May 27, 1943, pp. 556–558. Retrieved: August 1, 2011.
- Hallion, Richard, P. "The NACA, NASA, and the Supersonic-Hypersonic Frontier". NASA. NASA Technical Reports Server. Retrieved 7 September 2011.
- Anderson, John D. Jr. A History of Aerodynamics. New York: McGraw Hill, 1997, p. 424.
- "Comment by Hans von Ohain during public talks with Frank Whittle, p. 28." ascho.wpafb.af.mil. Retrieved: August 1, 2011.
- "Wing Planforms for High-Speed Flight." NACA TN-1033. Retrieved: July 24, 2011.
- Goebel, Greg. "The SAAB 29 Tunnan." Air Vectors. Retrieved: August 1, 2011.
Further reading
- "The High-speed Shape: Pitch-up and palliatives adopted on swept-wing aircraft", Flight International, 2 January 1964
- Swept Wings and Effective Dihedral
- The development of swept wings
- Simple sweep theory math
- Advanced math of swept and oblique wings
- The L-39 and swept wing research
- Sweep theory in a 3D environment
- CFD results showing the 3-dimensional supersonic bubble over the wing of an A320. Another CFD result showing the MDXX and how the shock vanishes close to the fuselage where the aerofoil is more slender | http://en.wikipedia.org/wiki/Swept_wing | 13
52 | This HTML version of Think Stats is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly.
Chapter 7 Hypothesis testing
Exploring the data from the NSFG, we saw several “apparent effects,” including a number of differences between first babies and others. So far we have taken these effects at face value; in this chapter, finally, we put them to the test.
The fundamental question we want to address is whether these effects are real. For example, if we see a difference in the mean pregnancy length for first babies and others, we want to know whether that difference is real, or whether it occurred by chance.
That question turns out to be hard to address directly, so we will proceed in two steps. First we will test whether the effect is significant, then we will try to interpret the result as an answer to the original question.
In the context of statistics, “significant” has a technical definition that is different from its use in common language. As defined earlier, an apparent effect is statistically significant if it is unlikely to have occurred by chance.
To make this more precise, we have to answer three questions:
- What do we mean by “chance”?
- What do we mean by “effect”?
- What do we mean by “unlikely”?
All three of these questions are harder than they look. Nevertheless, there is a general structure that people use to test statistical significance:
- Null hypothesis: a model of the system based on the assumption that the apparent effect is due to chance.
- p-value: the probability that the apparent effect would occur if the null hypothesis is true.
- Interpretation: based on the p-value, we conclude either that the effect is statistically significant or that it could plausibly have occurred by chance.
This process is called hypothesis testing. The underlying logic is similar to a proof by contradiction. To prove a mathematical statement, A, you assume temporarily that A is false. If that assumption leads to a contradiction, you conclude that A must actually be true.
Similarly, to test a hypothesis like, “This effect is real,” we assume, temporarily, that it is not. That’s the null hypothesis. Based on that assumption, we compute the probability of the apparent effect. That’s the p-value. If the p-value is low enough, we conclude that the null hypothesis is unlikely to be true.
7.1 Testing a difference in means
One of the easiest hypotheses to test is an apparent difference in mean between two groups. In the NSFG data, we saw that the mean pregnancy length for first babies is slightly longer, and the mean weight at birth is slightly smaller. Now we will see if those effects are significant.
To compute p-values, we find the pooled distribution for all live births (first babies and others), generate random samples that are the same size as the observed samples, and compute the difference in means under the null hypothesis.
If we generate a large number of samples, we can count how often the difference in means (due to chance) is as big or bigger than the difference we actually observed. This fraction is the p-value.
For pregnancy length, we observed n = 4413 first babies and m = 4735 others, and the difference in mean was δ = 0.078 weeks. To approximate the p-value of this effect, I pooled the distributions, generated samples with sizes n and m and computed the difference in mean.
This is another example of resampling, because we are drawing a random sample from a dataset that is, itself, a sample of the general population. I computed differences for 1000 sample pairs; Figure 7.1 shows their distribution.
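A hedged sketch of that resampling procedure in Python; `firsts` and `others` are assumed to be lists of pregnancy lengths already loaded from the NSFG data, and the book's own code differs in detail:

    import random

    def mean(xs):
        return sum(xs) / float(len(xs))

    def resample_pvalue(firsts, others, iters=1000):
        """Fraction of resampled pairs whose difference in means is at least the observed delta."""
        pool = firsts + others
        n, m = len(firsts), len(others)
        delta = abs(mean(firsts) - mean(others))        # the observed difference
        count = 0
        for _ in range(iters):
            sample1 = [random.choice(pool) for _ in range(n)]   # sample with replacement
            sample2 = [random.choice(pool) for _ in range(m)]
            if abs(mean(sample1) - mean(sample2)) >= delta:
                count += 1
        return count / float(iters)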
The mean difference is near 0, as you would expect with samples from the same distribution. The vertical lines show the cutoffs where x = −δ or x = δ.
Of 1000 sample pairs, there were 166 where the difference in mean (positive or negative) was as big or bigger than δ, so the p-value is approximately 0.166. In other words, we expect to see an effect as big as δ about 17% of the time, even if the actual distribution for the two groups is the same.
So the apparent effect is not very likely, but is it unlikely enough? I’ll address that in the next section.
Exercise 1 In the NSFG dataset, the difference in mean weight for first births is 2.0 ounces. Compute the p-value of this difference.
Hint: for this kind of resampling it is important to sample with replacement, so you should use random.choice rather than random.sample (see Section 3.8).
7.2 Choosing a threshold
The most common approach to hypothesis testing is to choose a threshold, α, for the p-value and to accept as significant any effect with a p-value less than α. A common choice for α is 5%. By this criterion, the apparent difference in pregnancy length for first babies is not significant, but the difference in weight is.
For this kind of hypothesis testing, we can compute the probability of a false positive explicitly: it turns out to be α.
To see why, think about the definition of false positive—the chance of accepting a hypothesis that is false—and the definition of a p-value—the chance of generating the measured effect if the hypothesis is false.
Putting these together, we can ask: if the hypothesis is false, what is the chance of generating a measured effect that will be considered significant with threshold α? The answer is α.
We can decrease the chance of a false positive by decreasing the threshold. For example, if the threshold is 1%, there is only a 1% chance of a false positive.
But there is a price to pay: decreasing the threshold raises the standard of evidence, which increases the chance of rejecting a valid hypothesis.
Exercise 2 To investigate the effect of sample size on p-value, see what happens if you discard half of the data from the NSFG. Hint: use random.sample. What if you discard three-quarters of the data, and so on?
What is the smallest sample size where the difference in mean birth weight is still significant with α = 5%? How much larger does the sample size have to be with α = 1%?
7.3 Defining the effect
When something unusual happens, people often say something like, “Wow! What were the chances of that?” This question makes sense because we have an intuitive sense that some things are more likely than others. But this intuition doesn’t always hold up to scrutiny.
For example, suppose I toss a coin 10 times, and after each toss I write down H for heads and T for tails. If the result was a sequence like THHTHTTTHH, you wouldn’t be too surprised. But if the result was HHHHHHHHHH, you would say something like, “Wow! What were the chances of that?”
But in this example, the probability of the two sequences is the same: one in 1024. And the same is true for any other sequence. So when we ask, “What were the chances of that,” we have to be careful about what we mean by “that.”
For the NSFG data, I defined the effect as “a difference in mean (positive or negative) as big or bigger than δ.” By making this choice, I decided to evaluate the magnitude of the difference, ignoring the sign.
A test like that is called two-sided, because we consider both sides (positive and negative) in the distribution from Figure 7.1. By using a two-sided test we are testing the hypothesis that there is a significant difference between the distributions, without specifying the sign of the difference.
The alternative is to use a one-sided test, which asks whether the mean for first babies is significantly higher than the mean for others. Because the hypothesis is more specific, the p-value is lower—in this case it is roughly half.
7.4 Interpreting the result
At the beginning of this chapter I said that the question we want to address is whether an apparent effect is real. We started by defining the null hypothesis, denoted H0, which is the hypothesis that the effect is not real. Then we defined the p-value, which is P(E|H0), where E is an effect as big as or bigger than the apparent effect. Then we computed p-values and compared them to a threshold, α.
That’s a useful step, but it doesn’t answer the original question, which is whether the effect is real. There are several ways to interpret the result of a hypothesis test:
To compute P(E|HA), we assume that the effect is real—that is, that the difference in mean duration, δ, is actually what we observed, 0.078. (This way of formulating HA is a little bit bogus. I will explain and fix the problem in the next section.)
So if the prior probability of HA is 50%, the updated probability, taking into account the evidence from this dataset, is almost 75%. It makes sense that the posterior is higher, since the data provide some support for the hypothesis. But it might seem surprising that the difference is so large, especially since we found that the difference in means was not statistically significant.
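A minimal sketch we add of the Bayesian update being described; the likelihood P(E|HA) below is a placeholder chosen so the numbers land near the figures quoted in the text (prior 0.5, posterior roughly 0.75), since the text does not state it explicitly:

    def posterior(prior, like_alt, like_null):
        """P(HA|E) by Bayes's theorem with two hypotheses, HA and H0."""
        return like_alt * prior / (like_alt * prior + like_null * (1 - prior))

    p_e_h0 = 0.166     # the two-sided p-value computed earlier in the chapter
    p_e_ha = 0.5       # hypothetical P(E|HA), an assumption for illustration
    print(posterior(0.5, p_e_ha, p_e_h0))   # roughly 0.75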
In fact, the method I used in this section is not quite right, and it tends to overstate the impact of the evidence. In the next section we will correct this tendency.
Exercise 3 Using the data from the NSFG, what is the posterior probability that the distribution of birth weights is different for first babies and others?
7.5 Cross-validation
In the previous example, we used the dataset to formulate the hypothesis HA, and then we used the same dataset to test it. That’s not a good idea; it is too easy to generate misleading results.
The problem is that even when the null hypothesis is true, there is likely to be some difference, δ, between any two groups, just by chance. If we use the observed value of δ to formulate the hypothesis, P(HA|E) is likely to be high even when HA is false.
We can address this problem with cross-validation, which uses one dataset to compute δ and a different dataset to evaluate HA. The first dataset is called the training set; the second is called the testing set.
In a study like the NSFG, which studies a different cohort in each cycle, we can use one cycle for training and another for testing. Or we can partition the data into subsets (at random), then use one for training and one for testing.
I implemented the second approach, dividing the Cycle 6 data roughly in half. I ran the test several times with different random partitions. The average posterior probability was P(HA|E) = 0.621. As expected, the impact of the evidence is smaller, partly because of the smaller sample size in the test set, and also because we are no longer using the same data for training and testing.
7.6 Reporting Bayesian probabilities
In the previous section we chose the prior probability P(HA) = 0.5. If we have a set of hypotheses and no reason to think one is more likely than another, it is common to assign each the same probability.
Some people object to Bayesian probabilities because they depend on prior probabilities, and people might not agree on the right priors. For people who expect scientific results to be objective and universal, this property is deeply unsettling.
One response to this objection is that, in practice, strong evidence tends to swamp the effect of the prior, so people who start with different priors will converge toward the same posterior probability.
Another option is to report just the likelihood ratio, P(E | HA) / P(E|H0), rather than the posterior probability. That way readers can plug in whatever prior they like and compute their own posteriors (no pun intended). The likelihood ratio is sometimes called a Bayes factor (see http://wikipedia.org/wiki/Bayes_factor).
Exercise 4 If your prior probability for a hypothesis, HA, is 0.3 and new evidence becomes available that yields a likelihood ratio of 3 relative to the null hypothesis, H0, what is your posterior probability for HA?
Exercise 5 This exercise is adapted from MacKay, Information Theory, Inference, and Learning Algorithms:
7.7 Chi-square test
In Section 7.2 we concluded that the apparent difference in mean pregnancy length for first babies and others was not significant. But in Section 2.10, when we computed relative risk, we saw that first babies are more likely to be early, less likely to be on time, and more likely to be late.
So maybe the distributions have the same mean and different variance. We could test the significance of the difference in variance, but variances are less robust than means, and hypothesis tests for variance often behave badly.
An alternative is to test a hypothesis that more directly reflects the effect as it appears; that is, the hypothesis that first babies are more likely to be early, less likely to be on time, and more likely to be late.
We proceed in five easy steps:
Using the data from the NSFG I computed χ2 = 91.64, which would occur by chance about one time in 10,000. I conclude that this result is statistically significant, with one caution: again we used the same dataset for exploration and testing. It would be a good idea to confirm this result with another dataset.
Exercise 6 Suppose you run a casino and you suspect that a customer has replaced a die provided by the casino with a “crooked die;” that is, one that has been tampered with to make one of the faces more likely to come up than the others. You apprehend the alleged cheater and confiscate the die, but now you have to prove that it is crooked.
You roll the die 60 times and get the following results:
What is the chi-squared statistic for these values? What is the probability of seeing a chi-squared value as large by chance?
7.8 Efficient resampling
Anyone reading this book who has prior training in statistics probably laughed when they saw Figure 7.1, because I used a lot of computer power to simulate something I could have figured out analytically.
Obviously mathematical analysis is not the focus of this book. I am willing to use computers to do things the “dumb” way, because I think it is easier for beginners to understand simulations, and easier to demonstrate that they are correct. So as long as the simulations don’t take too long to run, I don’t feel guilty for skipping the analysis.
However, there are times when a little analysis can save a lot of computing, and Figure 7.1 is one of those times.
Remember that we were testing the observed difference in the mean between pregnancy lengths for n = 4413 first babies and m = 4735 others. We formed the pooled distribution for all babies, drew samples with sizes n and m, and computed the difference in sample means.
Instead, we could directly compute the distribution of the difference in sample means. To get started, let’s think about what a sample mean is: we draw n samples from a distribution, add them up, and divide by n. If the distribution has mean µ and variance σ2, then by the Central Limit Theorem, we know that the sum of the samples is N(nµ, nσ2).
To figure out the distribution of the sample means, we have to invoke one of the properties of the normal distribution: if X is N(µ, σ2),
aX + b ∼ N(aµ + b, a2 σ2)
When we divide by n, a = 1/nand b = 0, so
X/n ∼ N(µ/n, σ2/ n2)
So the distribution of the sample mean is N(µ, σ2/n).
To get the distribution of the difference between two sample means, we invoke another property of the normal distribution: if X1 is N(µ1, σ12) and X2 is N(µ2, σ22),
So as a special case:
Putting it all together, we conclude that the sample in Figure 7.1 is drawn from N(0, fσ2), where f = 1/n + 1/m. Plugging in n = 4413 and m = 4735, we expect the difference of sample means to be N(0, 0.0032).
We can use erf.NormalCdf to compute the p-value of the observed difference in the means:
delta = 0.078 sigma = math.sqrt(0.0032) left = erf.NormalCdf(-delta, 0.0, sigma) right = 1 - erf.NormalCdf(delta, 0.0, sigma)
The sum of the left and right tails is the p-value, 0.168, which is pretty close to what we estimated by resampling, 0.166. You can download the code I used in this section from http://thinkstats.com/hypothesis_analytic.py
When the result of a hypothesis test is negative (that is, the effect is not statistically significant), can we conclude that the effect is not real? That depends on the power of the test.
Statistical power is the probability that the test will be positive if the null hypothesis is false. In general, the power of a test depends on the sample size, the magnitude of the effect, and the threshold α.
Exercise 7 What is the power of the test in Section 7.2, using α = 0.05 and assuming that the actual difference between the means is 0.078 weeks?
You can estimate power by generating random samples from distributions with the given difference in the mean, testing the observed difference in the mean, and counting the number of positive tests.
What is the power of the test with α = 0.10?
One way to report the power of a test, along with a negative result, is to say something like, “If the apparent effect were as large as x, this test would reject the null hypothesis with probability p.”
Like this book?
Are you using one of our books in a class?We'd like to know about it. Please consider filling out this short survey. | http://www.greenteapress.com/thinkstats/html/thinkstats008.html | 13 |
68 | Powered by Max Banner Ads
By Stephen Nelson
You use Excel’s linear regression functions to find a linear equation that best describes a data set.
Excel uses the sum of least squares method to find the straight line of best fit. People often try to predict future amounts by assuming linear growth and extending the line forward in time. For example, if you have a series of sales data for 9 months and want to predict the sales in the 10th month, you can use Excel’s linear regression functions to find the slope and y-intercept (the point on the y-axis where the line crosses) of the line that best fits the data.
Background Info on Linear Regression
To use the linear regression functions, it helps to remember the equation for a line:
where y is the dependent variable, m the slope, x the independent variable, and b the
y-intercept. If there are multiple ranges of x values, the equation looks like this:
NOTE To visualize and experiment with linear regression, visit the interactive web page at
http://www.math.csusb.edu/faculty/stanton/m262/regress/regress.html. Click the
graph area to add data points (x,y) to the graph. The applet draws the straight line
that best fits the points you add, adjusting the line for the new data points you add.
Using the FORECAST Function
The FORECAST function predicts a future y-value for the x-value you specify using existing
x and y values. The FORECAST function uses the following syntax:
=FORECAST(x, known ys, known xs)
where x is the x-value for which you want to predict a y-value.
Using the INTERCEPT Function
If you have existing x and y values, Excel can find the straight line that best fits the data and then calculate the point at which the line intersects the y-axis, in other words, the value of b in the “y=mx+b” equation. The y-intercept is useful when you want to know the value of the dependent variable when the independent variable equals 0.
NOTE: The INTERCEPT function returns the same value as the FORECAST function if you enter 0 for x in the FORECAST function.
The INTERCEPT function uses the following syntax:
=INTERCEPT (known ys, known xs)
Using the LINEST Function
The LINEST function returns the value of m and b given at least one set of known ys and known xs. The LINEST function has the following syntax:
=LINEST (known ys, known xs, constant, statistics)
where known ys is the array of y values you already know, known xs is the array of x values you may already know. If you leave out the known xs, they are assumed to be 1, 2, 3,…n. If constant is set to FALSE, b is assumed to be 0. If statistics is set to TRUE, the LINEST function also returns the standard error for each data point.
NOTE: If the known ys are in a single column or row, then Excel considers each column of
known xs to be a separate variable.
NOTE: The array known xs can include multiple sets of variables. If you use only one set, then known ys and known xs can be ranges of any shape, as long as they have equal dimensions. If you use more than one variable, then the known ys array must be either a single column or a single row. If you don’t enter known xs, Excel assumes this array is the same size as the known ys array.
Using the SLOPE Function
Use the SLOPE function to find the slope (m) of the linear regression line from the known x and known y data sets. The slope is the change in y over the change in x for any two points on the line. The SLOPE function in Excel uses the following syntax:
=SLOPE (known ys, known xs)
A positive (upwards) slope means that the independent variable (such as the number of salespeople) has a positive effect on a dependent variable (such as sales). A negative (downwards) slope means that the independent variable has a negative effect on the dependent variable. The steeper the slope, the more effect the independent variable has on the dependent variable.
Using the STEYX Function
Use the STEYX function to find the standard error of the predicted y-value for each individual x in the regression. The STEYX function uses the following syntax:
=STEYX (known ys, known xs)
Using the TREND Function
Use the TREND function to find values along a linear trend. Specify an array of new xs and the TREND function uses the method of least squares to fit a straight line to the known x and y data sets and return the y-values along the line for the new array. If constant is set to FALSE, the “b” in the y=mx+b equation is set to zero. The TREND function uses the following syntax:
=TREND (known ys, known xs, new xs, constant)
About the author: Seattle CPA Stephen L. Nelson wrote the bestselling book, MBA’s Guide to Microsoft Excel, from which this short article is adapted. Nelson also writes and edits the S Corporations Explained and LLCs Explained websites.
Want to read more about MS Excel tips and tutorials? Visit Hot Excel. | http://www.technicalcommunicationcenter.com/2009/04/30/how-to-perform-linear-regression-analysis-with-microsoft-excel/ | 13 |
65 | Lavoisier's successes stimulated chemists to search out and explore other areas in which accurate measurements might illuminate the study of chemical reactions. The acids comprised one such area.
Acids form a natural group sharing a number of properties. They tend to be chemically active, reacting with metals such as zinc, tin, or iron, dissolving them and producing hydrogen. They taste sour (if dilute enough or weak enough to be tasted with impunity), cause certain dyes to change colors in certain ways, and so on.
Opposed to the acids is another group of substances called bases. (Strong bases are termed alkalis). These are also chemically active, taste bitter, change dye colors in a fashion opposite to that induced by acids, and so on. In particular, solutions of acids will neutralize solutions of bases. In other words, if acids and bases are mixed in proper proportions, then the mixture will show the property of neither acids nor bases. The mixture will be, instead a solution of a salt, which, in general, is a much milder chemical than either an acid or a base. Thus, a solution of the strong and caustic acid, hydrochloric acid, if mixed with the proper amount of the strong and caustic alkali, sodium hydroxide, will become a solution of sodium chloride, ordinary table salt.
The German chemist Jeremias Benjamin Richter (1762-1807) turned his attention to these neutralization reactions, and measured the exact amounts of different acids that were required to neutralize a given quantity of a particular base, and vice versa. By careful measurements he found that fixed and definite amounts were required. There wasn't the leeway that a cook might count on in the kitchen, where a bit more or less of some ingredient is not terribly important. Instead, there was such a thing as an equivalent weight; a fixed weight of one chemical reacted with a fixed weight of another chemical. Richter published his work in 1792.
Two French chemists were then engaged in strenuous battle over whether this sort of definiteness existed not only in acid-base neutralization but throughout chemistry. To put it fundamentally, if a particular compound were made up of two elements (or three or four), were those two elements (or three or four) always present in this compound in the same, fixed proportions? Or would these proportions vary, depending on the exact method of preparing the compound? Berthollet, one of those who collaborated with Lavoisier in establishing modern chemical terminology, thought the latter. According to Berthollet's view, if a compound consisted of element x and y, then it would contain a more then average quantity of x, if it were prepared while using x in large excess.
Opposed to Berthollet's view was the opinion of Joseph Louis Proust (1754-1826), who did his work in Spain, safe (for a time) from the upheavals of the French Revolution. Using painstakingly careful analysis, Proust showed, in 1799, that copper carbonate, for instance, contained definite proportions by weight of copper, carbon, and oxygen, no matter how it was prepared in the laboratory or how it was isolated from natural sources. The preparation was always 5.3 parts of copper to 4 of oxygen to 1 of carbon.
Proust went on to show that a similar situation was true for a number of other compounds, and formulated the generalization that all compounds contained elements in certain definite proportions and in no other combinations, regardless of the conditions under which they were produces. This is called the law of definite proportion or, sometimes, Proust's Law. (Proust also showed that Berthollet, in presenting evidence that certain compounds varied in composition according to the method of preparation, was misled through inaccurate analysis and through the use of products he had insufficiently purified).
During the first few years of the nineteenth century, it became quite clear that Proust was right. Other chemists verified the law of definite proportions, and it became a cornerstone of chemistry. (It is true that some substances can vary, within limits, in their elemental constitution. These are special cases. The simple compounds which engaged the attention of the chemists of 1800 held firmly to the law of definite proportions).
From the moment Proust's law was announced, serious thoughts concerning it were forced into the chemical view.
After all, why should the law of definite proportions hold true? Why should a certain compound be made up always of 4 parts x and 1 part y, let us say, and never of 4.1 parts x or 3.9 parts x to 1 part y. If matter were continuous, this would be hard to understand. Why could not elements be mixed in slightly varying proportions?
But what if matter was atomistic in nature? Suppose a compound was formed when one atom of x joined with one atom of y and not otherwise. (Such a combination of atoms eventually came to be called a molecule, from a Latin word meaning "a small mass"). Suppose, next, that each atom of x happened to weigh four times as much as each atom of y. The compound would then have to consist of exactly 4 parts of x to 1 part of y.
In order to vary those proportions, an atom of y would have to be united with slightly more or slightly less than one atom of x. Since an atom, ever since the time of Democritus, had been viewed as being an indivisible portion of matter, it was unreasonable to expect that a small piece might be chipped off an atom, or that a sliver of a second atom might be added to it.
In other words, if matter consisted of atoms, then the law of definite proportions followed as a natural consequence. Furthermore, from the fact that the law of definite proportions was an observed fact, one could deduce that atoms were indeed indivisible objects.
An English chemist, John Dalton (1766-1844), went through this chain of reasoning. In this, he was greatly aided by a discovery he made. Two elements, he found, might, after all, combine in more than one set of proportions, but in so doing they exhibited a wide variation of combining proportions and different compound was formed for each variation.
As a simple example, consider the elements carbon and oxygen. Measurement shows that 3 parts of carbon (by weight) will combine with 8 parts of oxygen to form carbon dioxide. However, 3 parts of carbon and 4 parts of oxygen make up carbon monoxide. In such a case, the differing quantities of oxygen that combine with a fixed amount of carbon are found to be related in the form of small whole numbers. The 8 parts present in carbon dioxide is exactly twice that of the 4 parts present in carbon monoxide.
This is the law of multiple proportions. Dalton, after observing its existence in a number of reactions, advanced it in 1803.
The law of multiple proportions fits in neatly with atomistic notions. Suppose, for instance, that atoms of oxygen are uniformly 1-1/3 times as heavy as atoms of carbon. If carbon monoxide is formed through the combination of one atom of carbon with one atom of oxygen, the compound must consist of 3 parts by weight of carbon to 4 parts of oxygen.
Then, if carbon dioxide is formed of one atom of carbon and two atoms of oxygen, the proportion must naturally consist of 3 parts of carbon to 8 of oxygen.
The relationship in simple multiples would reflect the existence of compounds varying in makeup by whole atoms. Surely, if matter did indeed consist of tiny, indivisible atoms, these would be just the variations in makeup you would expect to find, and the law of multiple proportions makes sense.
When Dalton put forward his new version of the atomic theory based on the laws of definite proportions and of multiple proportion, in 1803, he acknowledged the debt to Democritus by keeping the term "atom" for the small particles that made up matter.
In 1808, he published A New System of Chemical Philosophy, in which his atomic theory was discussed in greater detail. In that year, too, his law of multiple proportions was verified by the investigations of another English chemist, William Hyde Wollaston (1766-1828). Wollaston lent his influential weight to the atomic theory in consequence, and Dalton's view in due course won general acceptance.
The atomic theory, by the way, was a death blow (if any were needed) to belief in the possibility of transmutation of alchemical terms. All evidence seemed to point to the possibility that the different metals each consisted of a separate type of atom. Since atoms were taken generally to be indivisible and unchangeable, one could not expect to change a lead atom to a gold atom in any circumstances. Lead, therefore, could not be transmuted to gold. (A century after Dalton's time this view had to be modified. One atom could, after all, be changed to another. The methods used to achieve this, however, were such as no alchemist ever imagined or could have performed.)
Dalton's atoms were, of course, far too small to be seen even under a microscope; direct observation was out of the question. Indirect measurements, however, could yield information as to their relative weights.
For instance, 1 part (by weight) of hydrogen combined with 8 parts of oxygen to form water. If one assumed that a molecule of water consisted of one atom of hydrogen and one atom of oxygen, then it would follow that the oxygen atom was eight times as heavy as the hydrogen atom. If it was decided to set the weight of the hydrogen atom arbitrarily equal to 1, then the weight of the oxygen atom on that scale would be 8.
Again, if 1 part of hydrogen combines with 5 parts of nitrogen in forming ammonia, and it is assumed that the ammonia molecule is made up of one atom of hydrogen and on of nitrogen, it would follow that the nitrogen atom would have a weight of 5.
Reasoning after this fashion, Dalton set up the first table of atomic weights. This table, although perhaps his most important single contribution, proved to be quite wrong in many entries. The chief flaw lay in Dalton's insistence that in general molecules were formed by the pairing of a single atom of one element with a single atom of another. He varied from this position only when absolutely necessary.
Evidence piled up, however, that indicated such a one-to-one combination was not necessarily the rule at all. The disagreement showed up in connection with water, in particular, even before Dalton had advanced his atomic theory.
Here, for the first time, the force of electricity invades the world of chemistry.
Knowledge of electricity dates back to the ancient Greeks, who found that when amber is rubbed, it gains the power to attract light objects.
Centuries later, the English physicist William Gilbert (1540-1603) was able to show that it was not amber alone that acted so, but that a number of other substances as well gained an attracting power when rubbed. About 1600, he suggested that substances of this sort be called "electrics", from the Greek word for amber.
As a result, a substance that gains such a power, through rubbing or otherwise, is said to carry an electric charge, or to contain electricity.
The French chemist Charles Francois de Cisternay du Fay (1698-1739) discovered, in 1733, that there were two kinds of electric charge: one that could be put on glass ("vitreous electricity") and one that could be put on amber ("resinous electricity"). A substance carrying one kind of charge attracted another substance carrying the other, but two substances bearing the same kind of charge repelled each other.
Benjamin Franklin (1706-1790), who was the first great American scientist as well as a great statesman and diplomat, suggested, in the 1740's, that there was a single electrical fluid. When a substance contained a greater than normal quantity of electric fluid, it possessed one kind of electric charge; when it contained a less than normal quantity, it possessed the other kind.
Franklin guessed it was the glass that contained the greater than normal quantity of electric fluid, so he said it carried a positive charge. The resin, he said, carried a negative charge. Franklin's terms have been used ever since, although the usage leads to a concept of current flow opposite to what now is known to be the fact.
The Italian physicist Alessandro Volta (1745-1827) introduced something new. He found, in 1800, that two metals (separated by solutions capable of conducting an electric charge) could be so arranged that new charge was created as fast as the old charge was carried off along a conducting wire. He had invented the first electric battery and produced an electric current.
Such an electric current is maintained by the chemical reaction involving the two metals and the solution between. Volta's work gave the first clear indication that chemical reactions had something to do with electricity, a suggestion that was not to be developed completely for another century. If a chemical reaction could produce an electric current, it did not seem to be too farfetched to suppose that an electric current could reverse matters and produce a chemical reaction.
Indeed, within six weeks of Volta's first description of his work, two English chemists, William Nicholson (1753-1815) and Anthony Carlisle (1768-1840), demonstrated the reverse action. They ran an electric current through water and found bubbles of gas began to appear at the electricity-conducting strips of metal which they had inserted in the water. The gas appearing at one strip was hydrogen and that appearing at the other was oxygen.
In effect, Nicholson and Carlisle had decomposed water into hydrogen and oxygen, such decomposition by an electric current being called electrolysis. They had achieved the reverse of Cavendish's experiment, in which hydrogen and oxygen had been combined to form water.
When the hydrogen and oxygen were trapped in separate vessels as they bubbled off, it turned out that just twice as large a volume of hydrogen was formed as of oxygen. The hydrogen was the lighter in weight, to be sure, but the larger volume indicate that there might be more atoms of hydrogen than of oxygen in the water molecule.
Since there was just twice as large a volume of hydrogen produced as of oxygen, there was at least a certain reasonableness in supposing that each molecule of water contained two atoms of hydrogen and one of oxygen, rather than on of each, as Dalton proposed.
Even if this were so, it remained true that 1 part of hydrogen (by weight) was combined with 8 parts of oxygen. It followed, then, that one oxygen atom was eight times as heavy as two hydrogen atoms taken together, and, therefore, sixteen times as heavy as a single hydrogen atom. If the weight of hydrogen is set at 1, then, the atomic weight of oxygen must be 16, not 8.
The findings of Nicholson and Carlisle were strengthened by the work of a French chemist, Joseph Louis Gay-Lussac (1778-1850), who reversed matters. He discovered that 2 volumes of hydrogen combined with 1 volume of oxygen to form water. He went on to find, in fact, that when gases combine to form compounds, they always did so in small whole number ratios. Gay-Lussac announced this law of combining volumes in 1808.
From the whole numbers ratios in the formation of water from hydrogen and oxygen, it again seemed reasonable to suppose that the water molecule was composed of two atoms of hydrogen and one of oxygen. It could also be argued from similar lines of evidence that the ammonia molecule did not consist of a combination of one nitrogen atom and one hydrogen atom, but of one nitrogen atom and three hydrogen atoms. From that evidence one could conclude that the atomic weight of nitrogen was not nearly 5, but was 14.
Consider hydrogen and chlorine next. These are gases which combine to form a third gas, hydrogen chloride. One volume of hydrogen combines with one volume of chlorine, and it seems reasonable to suppose that the hydrogen chloride molecule is made up of one hydrogen atom combined with one chlorine atom.
Suppose, now, that the hydrogen gas consists of single hydrogen atoms, spaced widely apart, and the chlorine gas consists of single chlorine atoms, spaced equally widely apart. These atoms pair up to form hydrogen chloride molecules, also spaced equally widely apart.
We begin, let us say, with 100 atoms of hydrogen and 100 atoms of chlorine, giving 200 widely spaced particles all told. The atoms pair up to form 100 molecules of hydrogen chloride. The 200 widely spaced particles (atoms) become only 100 widely spaced particles (molecules). If the spacing is equal throughout, we should find that 1 volume of hydrogen plus 1 volume of chlorine (2 volumes, altogether) should yield only 1 volume of hydrogen chloride. This, however, is not so.
By actual measurement, 1 volume of hydrogen combines with 1 volume of chlorine to form 2 volumes of hydrogen chloride. Since 2 volumes to start with remain 2 volumes to end with, there must be the same number of widely spaced particles before and after.
But suppose the hydrogen gas exists not as separate atoms but as hydrogen molecules, each made up of 2 atoms, and that chlorine consists of chlorine molecules, each made up of 2 atoms. In that case, the 100 atoms of hydrogen exist as 50 widely spaced particles (molecules), and the 100 atoms of chlorine also exist as 50 widely spaced particles. In the two gases, together, there are 100 widely spaced particles altogether, half of them hydrogen-hydrogen and the other half chlorine-chlorine.
If the two gases combine, they rearrange themselves to form hydrogen-chlorine, the atomic combination making up the hydrogen chloride molecule. Since there are 100 atoms of hydrogen altogether and 100 atoms of chlorine, there are 100 molecules of hydrogen chloride (each containing one of each kind of atom).
Now we find that 50 molecules of hydrogen plus 50 molecules of chlorine combine to form 100 molecules of hydrogen chloride. This matches the actually observed 1 volume of hydrogen plus 1 volume of chlorine yielding 2 volumes of hydrogen chloride.
All this takes for granted the fact that the particles of different gases, whether composed of single atoms or of combinations of atoms, are indeed equally spaced apart. If so, then equal numbers of particles of a gas (at a given temperature) would take up equal volumes no matter what the gas is.
The first to point out the necessity of this assumption that, in gases, equal numbers of particles take up equal volumes, was the Italian chemist Amedeo Avogadro (1776-1856). The assumption, advanced in 1811, is therefore known as Avogadro's hypothesis.
If the hypothesis is kept firmly in mind, it is possible to distinguish clearly between hydrogen atoms and hydrogen molecules (a pair of atoms) and between the atoms and molecules of other gases, too. For half a century after Avogadro's time, however, his hypothesis lay neglected, and the distinction between atoms and molecules of the important gaseous elements was not clearly defined in the minds of most chemists. Considerable uncertainty as to the value of the atomic weights of some of the most important elements persisted.
Fortunately, there were other keys to correctness in atomic weights. In 1818, for instance, a French chemist, Pierre Louis Dulong (1785-1838), and a French physicist, Alexis Therese Petit (1791-1820), working in collaboration, found one of them. They discovered that the specific heat of elements (the temperature rise that follows upon the absorption of a fixed quantity of heat) seemed to vary inversely with the atomic weight. That is, if element x had twice the atomic weight of element y, the temperature of element x would rise by only half as many degrees as that of element y, after both had absorbed the same quantity if heat. This is the law of atomic heat.
An element with an unknown atomic weight need then only have its specific heat measured, and at once one obtains an at least rough idea as to what its atomic weight is. This method worked only for solid elements, and not for every one of them, but it was better than nothing.
Again, a German chemist, Eihardt Mitscherlich (1794-1863), had discovered, by 1819, that compounds known to have similar compositions tend to crystallize together, as though molecules of one intermingled with the similarly shaped molecules of the other.
It followed from this law of isomorphism ("same shape") that if two compounds crystallized together and if the structure of only one of them was known, the structure of the second could be assumed to be similar. This property of isomorphic crystals enabled experimenters to correct mistakes that might arise from a consideration of combining weights alone, and served as a guide to the correct atomic weights.
Weights and Symbols
The turning point came with the Swedish chemist Jons Jakob Berzelius. He, next to Dalton himself, was chiefly responsible for the establishment of the atomic theory. About 1807, Berzelius threw himself into the determination of the exact elementary constitution of various compounds. By running many hundreds of analyses, he advanced so many examples of the law of definite proportions that the world of chemistry could no longer doubt its validity and had to accept, more or less willingly, the atomic theory which had grown directly out of the law of definite proportions.
Berzelius then set about determining atomic weights with more sophistication then Dalton had been able to do. In this project, Berzelius made use of the finding of Dulong and Petit and of Mitscherlich, as well as Gay-Lussac's law of combining volumes. (He did not, however, use Avogadro's hypothesis). Berzelius's first table of atomic weights, published in 1828, compared favorably, for all but two or three elements, with the accepted values of today.
An important difference between Berzelius's table and Dalton's was that Berzelius's values were not, generally, whole numbers.
Dalton's values, based on setting the atomic weight of hydrogen equal to 1, were all given as integers. This had led the English chemist William Prout (1785-1850) to suggest, in 1815, that all the elements were, after all, but composed of hydrogen. (His suggestion at first was made anonymously). The various atoms had different weights because they were made up of different numbers of hydrogen atoms in conglomeration. This came to be called Prout's hypothesis.
Berzelius's table seemed to destroy this attractive suggestion (attractive, because it reduced the growing number of elements to one fundamental substance, after the fashion of the ancient Greeks, and thereby seemed to increase the order and symmetry of the universe). Thus, on a hydrogen-equals-1 basis, the atomic weight of oxygen is roughly 15.9, and oxygen can scarcely be viewed as being made up of fifteen hydrogen atoms plus nine-tenths of a hydrogen atom.
For the next century, better and better tables of atomic weights were published, and Berzelius's finding that the atomic weights of the various elements were not integral multiples of the atomic weight of hydrogen became clearer and clearer.
In the 1860's, for instance, the Belgian chemist Jean Servais Stas (1813-1891) determined atomic weights more accurately than Berzelius had done. Then, at the beginning of the twentieth century, the American chemist Theodore William Richards (1868-1928), taking fantastic precautions, produced atomic weight values that may represent the ultimate accuracy possible to purely chemical methods.
If Berzelius's work had left any questions, that of Stas and Richards did not. The non-integral values of the atomic weights simply had to be accepted, and Prout's hypothesis seemed deader with each stroke. Yet even as Richards was producing his remarkably precise results, the whole meaning of atomic weight had to be re-evaluated, and Prout's hypothesis rose from its ashes.
The fact that atomic weights of the different elements were not simply related also brought up the question of the proper standard against which to measure the weight. Setting the atomic weight of hydrogen equal to 1 certainly seemed natural, and both Dalton and Berzelius tried it. Still this standard gave oxygen the uneven and inconvenient atomic weight of 15.9. It was oxygen, after all, that was usually used in determining the proportions in which particular elements combined, since oxygen combined easily with so many different elements.
To give oxygen a convenient integral atomic weight with minimum interference to the hydrogen = 1 standard, its weight was shifted from 15.9 to 16.0000. On this oxygen = 16 standard, the atomic weight of hydrogen was equal, roughly, to 1.008. The oxygen = 16 standard was retained till the mid-twentieth century, when a more logical one, making very slight changes in atomic weight, was accepted.
Once the atomic theory was accepted, one could picture substances as composed of molecules containing a fixed number of atoms of various elements. It seemed very natural to try to picture such molecules by drawing the correct number of little circles, each type of atom represented by a specific type of circle.
Dalton tried this symbolism. He let a simple circle represent an oxygen atom; one with a central dot a hydrogen atom; one with a vertical line a nitrogen atom; one that was solidly black, a carbon atom, and so on. Because it becomes difficult to think up sufficiently distinct circles for each element, Dalton let some be indicated by an appropriate letter. Thus sulfur was a circle containing an "S", phosphorus one that contained a "P", and so on.
Berzelius saw that the circles were superfluous and that the initials alone would do. He suggested, therefore, that each element possess a symbol standing both for the element generally and for a single atom of that element, and that his symbol consist primarily of the initial of the Latin name of the element. (Fortunately for English-speaking people, the Latin name is almost always very like the English name). Where two or more elements possess the same initial, a second letter from the body of the name night be added. These came to be the chemical symbols of the elements, and are today internationally agreed upon and accepted.
Thus, the chemical symbols of carbon, hydrogen, oxygen, nitrogen, phosphorus, and sulfur are C, H, O, N, P, and S, respectively. The chemical symbols of calcium and chlorine (with carbon pre-empting the simple capital) are Ca and Cl, respectively. Only where the Latin names differ from the English are the symbols less than obvious. Thus, the chemical symbols for gold, silver and lead are Au ("aurum"), Ag ("argentum"), and Pb ("plumbum"), respectively.
It is easy to use these symbols to indicate the number of atoms in a molecule. If the hydrogen molecule is made up of two atoms of hydrogen, it is H2. If the water molecule contains two atoms of hydrogen and one of oxygen, it is H2O. (The symbol without a number represents a single atom.) Again, carbon dioxide is CO2 and sulfuric acid is H2SO4, while hydrogen chloride is HCl. The chemical formulas of these simple compounds are self-explanatory.
Chemical formulas can be combined to form a chemical equation and describe a reaction. If one wishes to express the fact that carbon combines with oxygen to form carbon dioxide, one can write:
C + O2 --> CO2
Such equations must account for all the atoms if Lavoisier's law of conservation of mass is to be obeyed. In the equation just cited, for instance, you begin with an atom of C (carbon) and two atoms of O (the oxygen molecule), and you end with an atom of C and two atoms of O (the carbon dioxide molecule).
Suppose, though, you wished to say that hydrogen combined with chlorine to form hydrogen chloride. If this were written to simply
H2 + Cl2 --> HCl,
it could be pointed out that there were two atoms of hydrogen and two atoms of chlorine, to begin with, but only one of each at the conclusion. To write a balanced chemical equation, one must say:
H2 + Cl2 --> 2HCl.
In the same way, to describe the combination of hydrogen and oxygen to form water, we can write a balanced equation:
2H2 + O2 --> 2H2O
Meanwhile, the electric current, which had been used to such good effect by Nicholson and Carlisle, produced even more startling effects in the isolation of certain new elements.
Since Boyle's definition of "element" a century and a half before, substances qualifying as elements by that definition were discovered in astonishing numbers. More frustratingly, some substances were known which were not elements, yet contained undiscovered elements that chemists could not manage to study in isolation.
Thus, elements are frequently found in combination with oxygen (as oxides). To free the element it was necessary to remove the oxygen. If a second element with a stronger affinity for oxygen were to be introduced, perhaps the oxygen would leave the first element and become attached to the second. The method was found to work. Often carbon did the trick. Thus iron ore, which is essentially iron oxide, could be heated with coke (a relatively pure form of carbon). The carbon would combine with the oxygen to form carbon monoxide and carbon dioxide, and metallic iron would be left behind.
But now consider lime instead. From its properties lime, too, seems to be an oxide. However, no known element forms lime on combination with oxygen, and one must conclude that lime is a compound of an unknown element with oxygen. To isolate that unknown element, one might try to heat lime with coke; but if so, nothing happens. The unknown element hold oxygen so strongly that carbon atoms are powerless to snatch the oxygen atoms away. Nor could any other chemical strip lime of its oxygen.
It occurred to an English chemist, Humphry Davy (1778-1829), that what could not be pulled apart by chemicals might be forced apart by the strange power of the electric current, which could pry apart the water molecule with ease when chemicals were helpless.
Davy proceeded to construct an electric battery with over 250 metallic plates, the strongest ever built up to that time. He ran intense currents from this battery through solutions of the compounds suspected of containing unknown elements, but did so without result. He obtained only hydrogen and oxygen from the water.
Apparently, he had to eliminate water. However, when he used the solid substances themselves, he could not make a current pass through them. It occurred to him, finally, to melt the compounds and pass the current through the melt. He would then, so to speak, be using a waterless, conducting liquid.
This scheme worked. On October 6, 1807, Davy passed a current through molten potash (potassium carbonate) and liberated little globules of a metal he at once labeled potassium. (It was so active it pulled oxygen away from water, liberating hydrogen with enough energy to cause it to burst into flame). A week later, Davy isolated sodium from soda (sodium carbonate), an element only slightly less active than potassium.
In 1808, by using a somewhat modified method suggested by Berzelius, Davy isolated several metals from their oxides; magnesium from magnesia, strontium from strontia, barium from baryta, and calcium from lime. ("Calcium" is from the Latin word for lime.)
Among other things, Davy also showed that a certain greenish-gas, which Scheele had discovered a generation earlier and thought to be an oxide, was actually an element. Davy suggested the name chlorine, from the Greek word for "green". Davy also showed that hydrochloric acid, although a strong acid, contained no oxygen atom in its molecule, this disproving Lavoisier's suggestion that oxygen was a necessary component of acids.
Davy's work on electrolysis was extended by his assistant and protege, Michael Faraday (1791-1867), who grew to be an even greater scientist then his teacher. Faraday, in working with electrochemistry, introduced a number of terms that are still used today. It was he, for instance, who first termed the splitting of molecules by an electric current, electrolysis. At the suggestion of the English classical scholar William Whewell (1794-1866) Faraday named a compound or solution which could carry an electric current, an electrolyte. The metal rods or strips inserted into a melt or solution, he called electrodes; the electrode carrying a positive charge being an anode, the one carrying a negative charge the cathode.
The electric current was carried through the melt or solution by entities Faraday called ions (from a Greek word meaning "wanderer"). Those ions that traveled to the anode he called anions; those that traveled to the cathode were cations.
In 1832, he was able to announce the existence of certain quantitative relationships in electrochemistry. His first law of electrolysis stated: The mass of substance liberated at an electrode during electrolysis is proportional to the quantity of electricity driven through the solution. His second law of electrolysis stated: The weight of metal liberated by a given quantity of electricity is proportional to the equivalent weight of the metal.
Thus, if 2.7 times as much silver as potassium will combine with a given quantity of oxygen, then 2.7 times as much silver as potassium will be liberated from its compounds by a given quantity of electricity.
Faraday's law of electrolysis seemed to indicate, in the view of some chemists, that electricity could be subdivided into fixed, minimum units, as matter itself could. In other words, there were "atoms of electricity".
Suppose that when electricity passed through a solution, atoms of matter were dragged to either the cathode or the anode by "atoms of electricity". Suppose that, often, one "atom of electricity" sufficed to handle one atom of matter, but that sometimes two or even three "atoms of electricity" were required. In that case Faraday's laws of electrolysis could easily be explained.
It was not until the very end of the nineteenth century that this view was established and the "atoms of electricity" were located. Faraday, himself, however, was never enthusiastic about "atoms of electricity" or, indeed, about atomism in general. | http://www.3rd1000.com/history/atoms.htm | 13 |
71 | Formulas, Techniques, and Methods in Mathematics: Finding the People Behind the Numbers
The Pythagorean Theorem, Newton’s laws, and calculus are all mathematical
terms that most of us are familiar with, but the history and names behind these
terms are often forgotten. Many early mathematicians made genius contributions
to the field, and without their knowledge, math would be a completely different
discipline. Numerous advances in mathematics came from collaboration and small contributions
from many mathematicians over long periods of time. The following are descriptions
of some of the major contributors to the field of mathematics with external links
to further information on their personal lifves as well as the mathematic technique
they developed or helped further in the field.
Pythagoras (ca. 570 - ca. 490 B.C.)
One of the most famous names in mathematics, Pythagoras developed the core theorem
in trigonometry, the Pythagorean Theorem. Pythagoras was a Greek mathematician and
also a philosopher, who founded the religious movement Pythagoreanism, which combined
metaphysical beliefs with mathematical knowledge. Although his famous theorem was
generally accepted to be true about two hundred years before his birth by the Sumerians,
Pythagoras is given credit for proving that it is true. There is little known about
Pythagoras’s early life, and what is known is often fictionalized, since there
were few reliable sources written about him during his lifetime.
Greek Philosophers: Pythagoras
– A summary of the life of Pythagoras as a mathematician and philosopher.
and the Pythagoreans – A history of the Pythagorean School and the
importance of religion and mathematics to Pythagoras's school and followers.
- A philosophical look at Pythagoras’s mathematical and metaphysical idea,
the Tetraktys, a geometric figure made out of an equilateral triangle.
The Pythagorean Theorem – A proof of the Pythagorean Theorem, as well
as an introduction to Pythagorean triples.
Animated Proof of the Pythagorean
Theorem – An animation that illustrates a proof of the formula, c2
= a2 + b2 .
Euclid (ca. 325 - ca.220 B.C.)
Like Pythagoras, Euclid is another early Greek mathematician who lacks much of a
written record of his life. Based on a few mentions of Euclid in written records
from Greece, it is believed that he studied at Plato’s Academy in Athens.
Euclid made great contributions to the field of geometry through his book The Elements.
It was a popular geometry textbook up until the early twentieth century. In The
Elements, Euclid describes the principles of what is today known as Euclidian
Summary of Euclid – A
synopsis of Euclid’s life and a listing of his ten axioms or postulates of
Biography of Euclid
– A short biography of what is known about Euclid’s life and his works.
– Definitions, postulates, and propositions from Euclid’s work The Elements.
Interview with Euclid – An interview with Euclid written in first
person question and answer format that reveals biographical information.
Euclidian Geometry – Explanations of postulates from Euclid’s
The Elements which form the basis of Euclidean Geometry.
Leonardo Fibonacci (1170-1250)
Fibonacci was known by many names including Leonardo of Pisa, Leonardo Pisano Bigollo,
and Leonardo Bonacci, but his surname Fibonacci has stuck with him due to his namesake
discovery, the Fibonacci sequence. Fibonacci was an Italian-born mathematician famous
for his spread of the Hindu-Arabic numeral system throughout Europe because of his
writings and the Fibonacci sequence. As a young man, he traveled with his merchant
father and found that the Hindu-Arabic numeral system was more efficient than the
Roman numeral system. In his book, the Liber Abaci , Fibonacci shows why
this number system is more efficient and also introduces his famous sequence as
a problem involving the growth of the rabbit population. The Fibonacci sequence
is easily viewable in nature and everyday life and has become a popular topic in
movies, novels, and art.
Was Fibonacci? – A biography of Fibonacci and summary of his mathematical
Fibonacci Numbers Spelled Out
– Different derivations of the Fibonacci sequence mathematically spelled out.
in Nature – Pictures and diagram of examples of the Fibonacci sequence
The Fibonacci Association –
This association’s website, named after Fibonacci, contains information on
the Fibonacci numbers, Number Theory, and links to art displaying the Fibonacci
sequence in the spiral form.
The Golden Ratio –
An explanation of the Golden Ratio which is demonstrated in Fibonacci’s sequence.
The Fibonacci Sequence
Written – Shows how the Fibonacci Sequence can be written as a rule
and how to use the golden ratio to calculate Fibonacci numbers.
Pierre De Fermat (1601-1665)
Fermat was born in France and became a lawyer as a young man. After he received
his degree in civil law, he spent the remainder of his life working as the councillor
at the High Court of Judicature in Toulouse. Although he maintained the status
of “amateur mathematician” his work led to developments in infinitesimal
calculus, made contributions to Number Theory, analytic geometry, and the adequality
technique. Fermat also worked with Blaise Pascal, with whom he had a close relationship,
to discover the theory of probability. Fermat left his notable theorem, Fermat’s
Last Theorem unproven, and findng the proof of this theorem became the ultimate
goal of many mathematicians. Finally in the late twentieth century, mathematician
Andrew Wiles was able to write the proof of Fermat's theorem.
of Pierre de Fermat – A summary of Fermat’s personal life as
well as the role he played in mathematics.
What is the Last Theorem – A description with diagrams of what Fermat’s
Last Theorem entails.
Fermat’s Last Theorem
– This contains a lengthy description of Fermat’s theorem, a shortened
proof of the theorem, and details about the race to find the proof.
Andrew Wiles – Princeton’s faculty profile of the mathematician
who was able to prove Fermat’s Last Theorem.
Pierre de Fermat –
Biography of Fermat with diagrams of some of his mathematical concepts.
Blaise Pascal (1623-1662)
Pascal was a French mathematician, philosopher, inventor, and physicist. His father
was a tax commissioner, which prompted Pascal to invent a calculating machine as
a young man. He was a follower of Jansenism, a movement within Catholicism. His
primary contributions to mathematics were Pascal’s triangle and his collaboration
with Fermat on the theory of probability. Due to Pascal’s deep religious beliefs,
in 1654, he stopped all his work in mathematics, but broke this constrainment a
few years later when he offered up a competition to see who could find the numerical
derivation of a cycloid; under a pseudonym, he submitted the winning answer. In
honor of Pascal’s contributions to math and science, Pascal’s law, Pascal,
the unit of pressure, and the programming language also referred to by his given
surname were named so for his accomplishments.
– A summary and timeline of major events in Blaise Pascal’s life.
– The European Graduate Schools’ biography of Pascal.
– This page gives the history, construction, patterns, and applications of
The Fibonacci sequence in Pascal’s
Triangle – A diagram that shows how the Fibonacci sequence appears
in Pascal’s Triangle.
The Cycloid (PDF) – Information
about the cycloid and the math behind it.
Sir Issac Newton (1643-1727)
Newton was engaged in a plethora of different scientific fields including optics,
mathematics, mechanics, and gravitation, and his famous laws are now prevelently
used in many scientific and mathematical subjects. Newton was born in England. As
a boy, his mother wished for him to become a farmer, but he was able to go on with
his schooling and eventually graduated from Trinity College in Cambridge. Newton’s
famous three laws of motion formularized inertia, applied force and momentum, and
acceleration. Often illustrated in popular culture, Newton himself spread the idea
that he was able to formulate the law of gravitation after an apple fell on his
head from the branch of a tree overhead; although it is believed it didn’t
happen quite this way since it took Newton about twenty years to fully write the
theory. The unit of force, the newton, is named for Newton’s contribution
of the first and second laws of motion having to do with force.
Sir Issac Newton – A biography of Newton with a summary of his achievements.
The Mind of Issac
Newton – A Multimedia project from McMaster University with information
on Newton’s innovations.
of Motion – This provides a simplified explanation of Newton’s
three laws of motion.
Newton: The Universal Laws of Gravitation – An explanation of how
Newton came about discovering the universal laws of gravitation and the math behind
Issac Newton’s Life –
Information on Issac Newton’s life and the fields he made breakthroughs in.
Leonhard Euler (1701-1783)
Considered by many to be one of the greatest mathematicians of all times, Swiss-born
Leonhard Euler made advancements in infinitesimal calculus, graph theory, and is
notable for much of the modern day notation that is currently used in mathematics.
Euler also made contributions to physics and astronomy. Working with his mathematician
friend Daniel Bernoulli, he helped develop the Euler-Bernoulli beam equation. Throughout
his lifetime, Euler wrote a tremendous amount of books on mathematics. He is immortalized
on postage stamps in Germany, Switzerland, and Russia, and his picture was even
featured on a series of the Swiss banknote.
Leonhard Euler – A
summary of Leohard Euler's life and his contributions to mathematics and science.
Method – Explanations and examples of Euler’s method in
Euler’s Method - Formulas – Information about Euler’s
method and how it is used. Also contains a summary of Euler’s method in formulas.
– An explanation of Euler’s equation which shows the relationship between
the trigonometric function and complex exponential function.
– Details about Euler’s identity and a corollary of the identity and
how they can be used.
Jean Baptiste Joseph Fourier (1768-1830)
Fourier was a French-born mathematician and physicist. At a young age he became
an orphan, but was still able to become obtain a commendable education. He spent
a portion of his life in Egypt as governor after venturing on an expedition with
Napoleon Bonaparte. Fourier performed many experiments on the propagation of heat.
Today he is known for discovering the “greenhouse effect.” Fourier worked
on determinate equations, but never finished them before his death. His work was
later finished by a few other mathematicians, and today Fourier analysis is named
after his original work.
Baptiste Joseph Fourier – A short biography of the famous mathematician.
Fourier Series Demonstrated –
A java applet that demonstrates the Fourier series.
Fourier Series Tutorials – Listings
of interactive flash programs that can help one learn all about Fourier series.
Fourier Transform Table
(PDF) – A table of the Fourier transform.
Heating by the
Greenhouse Effect – An introduction to the greenhouse effect with
a description of how Fourier was able to discover this idea.
Carl Friedrich Gauss (1777-1855)
Gauss was born in Germany and was a very precocious child. He made his first major
mathematical discoveries in his teenage years. Carl Friedrich Gauss led the life
of a perfectionist and published little of his mathematical material due to this
personality trait. His famous Gaussian distribution is more familiarly known today
as the bell curve with which teachers often base their grading systems around. Gauss
also collaborated with the physicist, Wilhelm Weber. Their work led to advancements
in magnetism, and together they invented the electromagnetic telegraph. Around the
time of their collaborative work, Gauss also formulated Gauss’s Law, which
later became one of the four laws of Maxwell’s equations, the foundations
of all modern-day electrical technologies. His legacy has been commemorated in many
forms such as in his image on stamps and currency, a crater on the Moon named Gauss,
and an observation tower in Germany named the Gauss Tower.
Johann Carl Friedrich Gauss
– A summary of Gauss’s life and short timeline of major highlights in
Gauss and His Life Works - A biography of Gauss’s
life with pictures and further links to more information on the mathematician.
Carl Friedrich Gauss
– Information on his major achievements and pictures of stamps and currency
with his picture dedicated to his honor.
Function – The math behind the Gaussian distribution function.
The Normal Distribution
– Characteristics of the Gaussian distribution function, or the more commonly
named bell curve, and an example of one of its applications.
Equations – An examination of Maxwell’s equation which shows
Gauss’s law composing part of the equations. | http://www.peoplefinders.com/article-famous-people-throughout-history-mathematicians.aspx | 13 |
109 | Transcript: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So we're going on to the third unit here. So we're getting started with Unit 3. And this is our intro to integration. It's basically the second half of calculus after differentiation. Today what I'll talk about is what are known as definite integrals. Actually, it looks like, are we missing a bunch of overhead lights? Is there a reason for that? Hmm. Let's see. Ahh. Alright. OK, that's a little brighter now. Alright. So the idea of definite integrals can be presented in a number of ways. But I will be consistent with the rest of the presentation in the course. We're going to start with the geometric point of view. And the geometric point of view is, the problem we want to solve us to find the area under a curve. The other point of view that one can take, and we'll mention that at the end of this lecture, is the idea of a cumulative sum. So keep that in mind that there's a lot going on here. And there are many different interpretations of what the integral is.
Now, so let's draw a picture here. I'll start at a place a and end at a place b. And I have some curve here. And what I have in mind is to find this area here. And, of course, in order to do that, I need more information than just where we start and where we end. I also need the bottom and the top. By convention, the bottom is the x axis and the top is the curve that we've specified, which is y = f(x). And we have a notation for this, which is the notation using calculus for this as opposed to some geometric notation. And that's the following expression. It's called an integral, but now it's going to have what are known as limits on it. It will start at a and end at b. And we write in the function f(x) dx. So this is what's known as a definite integral. And it's interpreted geometrically as the area under the curve. The only difference between this collection of symbols and what we had before with indefinite integrals is that before we didn't specify where it started and where it ended.
Now, in order to understand what to do with this guy, I'm going to just describe very abstractly what we do. And then carry out one example in detail. So, to compute this area, we're going to follow initially three steps. First of all, we're going to divide into rectangles. And unfortunately, because it's impossible to divide a curvy region into rectangles, we're going to cheat. So they're only quote-unquote rectangles. They're almost rectangles. And the second thing we're going to do is to add up the areas. And the third thing we're going to do is to rectify this problem that we didn't actually hit the answer on the nose. That we were missing some pieces or were choosing some extra bits. And the way we'll rectify that is by taking the limit as the rectangles get thin. Infinitesimally thin, very thin.
Pictorially, again, that looks like this. We have a and our b, and we have our guy here, this is our curve. And I'm going to chop it up. First I'm going to chop up the x axis into little increments. And then I'm going to chop things up here. And I'll decide on some rectangle, maybe some staircase pattern here. Like this. Now, I don't care so much. In some cases the rectangles overshoot; in some cases they're underneath. So the new area that I'm adding up is off. It's not quite the same as the area under the curve. It's this region here. But it includes these extra bits here. And then it's missing this little guy here. This little bit there is missing. And, as I say, these little pieces up here, this a little bit up here is extra. So that's why we're not really dividing up the region into rectangles. We're just taking rectangles. And then the idea is that as these get thinner and thinner, the little itty bitty amounts that we miss by are going to 0. And they're going to be negligible. Already, you can see it's kind of a thin piece of area, so we're not missing by much. And as these get thinner and thinner, the problem goes away and we get the answer on the nose in the limit.
So here's our first example. I'll take the first interesting curve, which is f ( x) = x^2. I don't want to do anything more complicated than one example, because this is a real labor here, what we're going to go through. And to make things easier for myself, I'm going to start at a = 0. But in order to see what the pattern is, I'm going to allow b to be arbitrary.
Let's draw the graph and start breaking things up. So here's the parabola, and there's this piece that we want, which is going to stop at this place, b, here. And the first step is to divide into n pieces. That means, well, graphically, I'll just mark the first three. And maybe there are going to be many of them. And then I'll draw some rectangles here, and I'm going to choose to make the rectangles all the way from the right. That is, I'll make us this staircase pattern here, like this. That's my choice. I get to choose whatever level I want, and I'm going to choose the right ends as the shape of the staircase. So I'm overshooting with each rectangle.
And now I have to write down formulas for what these areas are. Now, there's one big advantage that rectangles have. And this is the starting place. Which is that it's easy to find their areas. All you need to know is the base and the height, and you multiply, and you get the area. That's the reason why we can get started with rectangles. And in this case, these distances, I'm assuming that they're all equal, equally spaced, intervals. And I'll always be doing that. And so the spacing, the bases, the base length, is always b / n. All equal intervals. So that's the base length. And next, I need the heights. And in order to keep track of the heights, I'm going to draw a little table here, with x and f(x), and plug in a few values just to see what the pattern is. The first place here, after 0, is b / n. So here's b / n, that's an x value. And the f(x) value is the height there. And that's just, I evaluate f(x), f(x) = x^2. And that's (b / n)^2. And similarly, the next one is 2b / n. And the value here is (2b / n)^2. That's this. This height here is 2b / n. That's the second rectangle. And I'll write down one more. 3b / n, that's the third one. And the height is (3b / n)^2. And so forth.
Well, my next job is to add up these areas. And I've already prepared that by finding out what the base and the height is. So the total area, or the sum of the areas, let's say, of these rectangles, is - well, the first one is (b / n) ( b / n)^2. The second one is 2b / n - I'm sorry, is (b / n)( 2b / n)^2. And it just keeps on going. And the last one is (b / n)( nb / n)^2. So it's very important to figure out what the general formula is. And here we have a base. And here we have a height, and here we have the same kind of base, but we have a new height. And so forth. And the pattern is that the coefficient here is 1, then 2, then 3, all the way up to n. The rectangles are getting taller and taller, and this one, the last one is the biggest.
OK, this is a very complicated gadget. and the first thing I want to do is simplify it and then I'm actually going to evaluate it. But actually I'm not going to evaluate it exactly. I'm just going to evaluate the limit. Turns out, limits are always easier. The point about calculus here is that these rectangles are hard. But the limiting value is an easy value. So what we're heading for is the simple formula, as opposed to the complicated one. Alright, so the first thing I'm going to do is factor out all these b / n factors. There's a b / n here, and there's a (b / n)^2. So all told, we have a (b / n)^3. As a common factor. And then the first term is 1, and the second term, what's left over, is 2^2. 2^2. And then the third term would be 3^2, although I haven't written it. In the last term, there's an extra factor of n^2. In the numerator. OK, is everybody with me here?
Now, what I'd like to do is to eventually take the limit as n goes to infinity here. And the quantity that's hard to understand is this massive quantity here. And there's one change that I'd like to make, but it's a very modest one. Extremely minuscule. Which is that I'm going to write 1, just to see that there's a general pattern here. Going to write 1 as 1^2. And let's put in the 3 here, why not. And now I want to use a trick. This trick is not completely recommended, but I will say a lot more about that when we get through to the end. I want to understand how big this quantity is. So I'm going to use a geometric trick to draw a picture of this quantity. Namely, I'm going to build a pyramid. And the base of the pyramid is going to be n by n blocks. So imagine we're in Egypt and we're building a pyramid. And the next layer is going to be n - 1 by n - 1. So this next layer in is n minus 1 by n minus 1. So the total number of blocks on the bottom is n squared. That's this rightmost term here. But the next term, which I didn't write in but maybe I should, the next to the last term was this one. And that's the second layer that I've put on.
Now, this is, if you like, the top view. But perhaps we should also think in terms of a side view. So here's the same picture, we're starting at n and we build up this layer here. And now we're going to put a layer on top of it, which is a little shorter. So the first layer is of length n. And the second layers is of length n - 1, and then on top of that we have something of length n - 2, and so forth. And we're going to pile them up. So we pile them up. All the way to the top, which is just one giant block of stone. And that's this last one, 1^2. So we're going backwards in the sum. And so I have to build this whole thing up. And I get all the way up in this staircase pattern to this top block, up there.
So here's the trick that you can use to estimate the size of this, and it's sufficient in the limit as n goes to infinity. The trick is that I can imagine the solid thing underneath the staircase, like this. That's an ordinary pyramid, not a staircase pyramid. Which is inside. And this one is inside. And so, but it's an ordinary pyramid as opposed to a staircase pyramid. And so, we know the formula for the volume of that. Because we know the formula for volumes of cones. And the formula for the volume of this guy, of the inside, is 1/3 base times height. And in that case, the base here - so that's 1/3, and the base is n by n, right? So the base is n^2. That's the base. And the height, it goes all the way to the top point. So the height is n. And what we've discovered here is that this whole sum is bigger than 1/3 n^3.
Now, I claimed that - this line, by the way has slope 2. So you go 1/2 over each time you go up 1. And that's why you get to the top. On the other hand, I can trap it on the outside, too, by drawing a parallel line out here. And this will go down 1/2 more on this side and 1/2 more on the other side. So the base will be (n + 1) by (n + 1) of this bigger pyramid. And it'll go up 1 higher. So on the other end, we get that this is less than 1/3 (n + 1)^3. Again, (n + 1)^2 ( n + 1) again this is a base times a height. Of this bigger pyramid. Yes, question.
STUDENT: [INAUDIBLE] and then equating it to volume.
PROFESSOR: The question is, it seems as if I'm adding up areas and equating it to volume. But I'm actually creating volumes by making these honest increments here. That is, the base is n but the height is 1. Thank you for pointing that out. Each one of these little staircases here has exactly height 1. So I'm honestly sticking blocks there. They're sort of square blocks, and I'm lining them up. And I'm thinking of n by n cubeds, if you like. Honest cubes, there. And the height is 1. And the base is n^2.
Alright, so I claim that I've trapped this guy in between two quantities here. And now I'm ready to take the limit. If you look at what our goal is, we want to have an expression like this. And I'm going to - this was the massive expression that we had. And actually, I'm going to write it differently. I'll write it as b^3 (1^2 + 2^2 + ... + n^2) / n^3. I'm going to combine all the n's together. Alright, so the right thing to do is to divide what I had up there. Divide by n^3 in this set of inequalities there. And what I get here is 1/3 < (1^2 + 2^2 + 3^2 + ... + n^2) / n^3 < 1/3 (n + 1)^3 / n^3. And that's 1/3 (1 + (1 / n))^3.
And now, I claim we're done. Because this is 1/3, and the limit, as n goes to infinity, of this quantity here, is easily seen to be, this is, as n goes to infinity, this goes to 0. So this also goes to 1/3. And so our total here, so our total area, under x^2, which we sometimes might write the integral from 0 to b of x^2 dx, is going to be equal to - well, it's this 1/3 that I've got. But then there was also a b^3 there. So there's this extra b cubed here. So it's 1/3 b^3. That's the result from this whole computation. Yes, question.
PROFESSOR: So that was a very good question. The question is, why did we leave the b / n^3 out, for this step. And a part of the answer is malice aforethought. In other words, we know what we're heading for. We know, we understand, this quantity. It's all one thing. This thing is a sum, which is growing larger and larger. It's not what's called a closed form. So, the thing that's not known, or not well understood, is how big is this quantity here. 1^2 + 2^2. The sum of the squares. Whereas, this is something that's quite easy to understand. So we factor it out. And we analyze carefully the piece which we don't know yet, how big it is. And we discovered that it's very, very similar to n^3. But it's more similar to 1/3 n^3. It's almost identical to 1/3 n^3. This extra piece here. So that's what's going on. And then we match that. Since this thing is very similar to 1/3 n^3 we cancel the n^3's and we have our result.
Let me just mention that although this may seem odd, in fact this is what you always do if you analyze these kinds of sum. You always factor out whatever you can. And then you still are faced with a sum like this. So this will happen systematically, every time you're faced with such a sum.
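The limit just computed is easy to check numerically. Below is a small Python sketch (not part of the lecture; the choice of b and the values of n are arbitrary) that builds the same right-endpoint sum (b/n)(b/n)^2 + (b/n)(2b/n)^2 + ... + (b/n)(nb/n)^2 and compares it with b^3 / 3:
# Right-endpoint Riemann sum for f(x) = x^2 on [0, b], compared with b^3 / 3.
def right_sum_x_squared(b, n):
    dx = b / n                                  # base of each rectangle
    return sum(dx * (i * dx) ** 2 for i in range(1, n + 1))

b = 2.0
for n in (10, 100, 1000, 10000):
    print(n, right_sum_x_squared(b, n), b ** 3 / 3)
# The sums shrink toward b^3 / 3 = 2.666..., matching the limit 1/3 obtained by
# trapping (1^2 + 2^2 + ... + n^2) / n^3 between 1/3 and (1/3)(1 + 1/n)^3.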
OK, now I want to say one more word about notation. Which is that this notation is an extreme nuisance here. And it's really sort of too large for us to deal with. And so, mathematicians have a shorthand for it. Unfortunately, when you actually do a computation, you're going to end up with this collection of stuff anyway. But I want to just show you this summation notation in order to compress it a little bit.
The idea of summation notation is the following. I'll illustrate it with an example first. So, the general notation is the sum of ai, i = 1 to n = a1 + a2 + ... plus an. So this is the abbreviation. And this is a capital Sigma. And so, this quantity here, for instance, is (1 / n^3) the sum i^2, i = 1 to n. So that's what this thing is equal to. And what we just showed is that that tends to 1/3 as n goes to infinity. So this is the way the summation notation is used. There's a formula for each of these coefficients, each of these entries here, or summands. And then this is just an abbreviation for what the sum is. And this is the reason why I stuck in that 1^2 at the beginning, so that you could see that the pattern worked all the way down to i = 1. It isn't an exception to the rule. It's the same as all of the others.
Now, over here, in this board, we also had one of these extremely long sums. And this one can be written in the following way. And I hope you agree, this is rather hard to scan. But one way of writing it is, it's the sum from i = 1 to n of, now I have to write down the formula for the general term. Which is (b / n)( ib / n)^2. So that's a way of abbreviating this massive formula into one which is just a lot shorter. And now, the manipulation that I performed with it, which is to factor out this (b / n)^3, is something that I'm perfectly well allowed to do also over here. This is the distributive law. This, if I factor out b^3 / n^3, I'm left with the sum i = 1 to n of i^2, right? So these notations make it a little bit more compact. What we're dealing with. The conceptual phenomenon is still the same. And the mess is really still just hiding under the rug. But the notation is at least fits with fewer symbols, anyway.
So let's continue here. I've giving you one calculation. And now I want to fit it into a pattern. And here's the thing that I'd like to calculate. So, first of all let's try the case, so I'm going to do two more examples. I'll do two more examples, but they're going to be much, much easier. And then things are going to get much easier from now on. So, the second example is going to be the function f(x) = x. If I draw that, that's this function here, that's the line with slope 1. And here's b. And so this area here is the same as the area of the triangle with base b and height b. So the area is equal to 1/2 b * b, so this is the base. And this is the height. We also know how to find the area of triangles. And so, the formula is 1/2 b^2.
And the third example, notice, by the way, I didn't have to do this elaborate summing to do that, because we happen to know this area. The third example is going to be even easier. f(x) = 1. By far the most important example, remarkably, when you get to 18.02 and multivariable calculus, you will forget this calculation. Somehow. And I don't know why, but it happens to everybody. So, the function is just horizontal, like this. Right? It's the constant 1. And if we stop it at b, then the area we're interested in is just this, from 0 to b. And we know that this is height 1, so the area is base times height, which is b * 1. So it's b.
Let's look now at the pattern. We're going to look at the pattern of the function, and it's the area under the curve, which is this, has this elaborate formula in terms of, so this is just the area under the curve. Between 0 and b. And we have x^2, which turned out to be b^3 / 3. And we have x, which turned out to be - well, let me write them over just a bit more to give myself some room. x, which turns out to be b^2 / 2. And then we have 1, which turned out to be b. So this, I claim, is suggestive. If you can figure out the pattern, one way of making it a little clearer is to see that x = x^1. And 1 = x^0. So these are the cases 0, 1, and 2. And b = b^1 / 1. So, if you want to guess what happens when f(x) = x^3, well if it's 0, you do b^1 / 1. If it's 1, you do b^2 / 2. If it's 2, you do b^3 / 3. So it's reasonable to guess that this should be b^4 / 4. That's a reasonable guess, I would say.
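The guessed pattern can also be checked numerically. A short sketch (again not from the lecture; the values of b, n, and k are arbitrary choices) comparing right-endpoint sums with b^(k+1) / (k+1) for several powers:
# Does the area under x^k from 0 to b approach b^(k+1) / (k+1)?
def right_sum_power(b, n, k):
    dx = b / n
    return sum(dx * (i * dx) ** k for i in range(1, n + 1))

b, n = 2.0, 100000
for k in (0, 1, 2, 3):
    print(k, right_sum_power(b, n, k), b ** (k + 1) / (k + 1))
# Each sum lands close to b^(k+1) / (k+1), supporting the b^4 / 4 guess for x^3.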
Now, the strange thing is that in history, Archimedes figured out the area under a parabola. So that was a long time ago. It was after the pyramids. And he used, actually, a much more complicated method than I just described here. And his method, which is just fantastically amazing, was so brilliant that it may have set back mathematics by 2,000 years. Because people were so, it was so difficult that people couldn't see this pattern. And couldn't see that, actually, these kinds of calculations are easy. So they couldn't get to the cubic. And even when they got to the cubic, they were struggling with everything else. And it wasn't until calculus fit everything together that people were able to make serious progress on calculating these areas. Even though he was the expert on calculating areas and volumes, for his time.
So this is really a great thing that we now can have easy methods of doing it. And the main thing that I want to tell you is that's we will not have to labor to build pyramids to calculate all of these quantities. We will have a way faster way of doing it. This is the slow, laborious way. And we will be able to do it so easily that it will happen as fast as you differentiate. So that's coming up tomorrow. But I want you to know that it's going to be. However, we're going to go through just a little pain before we do it. And I'll just tell you one more piece of notation here.
So you need to have a little practice just to recognize how much savings we're going to make. But never again will you have to face elaborate geometric arguments like this. So let me just add a little bit of notation for definite integrals. And this goes under the name of Riemann sums. Named after a mathematician from the 1800s. So this is the general procedure for definite integrals. We divide it up into pieces. And how do we do that? Well, so here's our a and here's our b. And what we're going to do is break it up into little pieces. And we're going to give a name to the increment. And we're going to call that delta x.
So we divide up into these. So how many pieces are there? If there are n pieces, then the general formula is always the delta x is 1 / n times the total length. So it has to be (b - a) / n. We will always use these equal increments, although you don't absolutely have to do it. We will, for these Riemann sums. And now there's only one bit of flexibility that we will allow ourselves. Which is this. We're going to pick any height of f between. In the interval, in each interval. So what that means is, let me just show it to you on the picture here. Is, I just pick any value in between, I'll call it ci, which is in there. And then I go up here. And I have the level, which is f(ci). And that's the rectangle that I choose. In the case that we did, we always chose the right-hand, which turned out to be the largest one. But I could've chosen some level in between. Or even the left-hand end. Which would have meant that the staircase would've been quite a bit lower. So any of these staircases will work perfectly well. So that means we're picking f(ci), and that's a height. And now we're just going to add them all up. And this is the sum of the areas of the rectangles, because this is the height. And this is the base.
This notation is supposed to be, now, very suggestive of the notation that Leibniz used. Which is that in the limit, this becomes an integral from a to b of f(x) dx. And notice that the delta x gets replaced by a dx. So this is what happens in the limit. As the rectangles get thin. So that's as delta x goes to 0. And these gadgets are called Riemann sums. This is called a Riemann sum. And we already worked out an example. This very complicated guy was an example of a Riemann sum. So that's a notation. And we'll give you a chance to get used to it a little more when we do some numerical work at the end.
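The definition above translates almost word for word into code. Here is a minimal sketch (the function name and the sample-point choices are my own, for illustration) of a Riemann sum with an arbitrary choice of ci in each interval:
# General Riemann sum: add up f(c_i) * delta_x, where c_i is any point chosen
# in the i-th subinterval of [a, b].
def riemann_sum(f, a, b, n, choose="right"):
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        left = a + i * dx
        if choose == "left":
            c = left                  # left endpoint
        elif choose == "mid":
            c = left + dx / 2         # midpoint
        else:
            c = left + dx             # right endpoint, as in the x^2 example
        total += f(c) * dx
    return total

# Any choice of sample points gives the same limit as the rectangles get thin.
for choice in ("left", "mid", "right"):
    print(choice, riemann_sum(lambda x: x ** 2, 0.0, 2.0, 1000, choice))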
Now, the last thing for today is, I promised you an example which was not an area example. I want to be able to show you that integrals can be interpreted as cumulative sums. Integrals as cumulative sums. So this is just an example. And, so here's the way it goes. So we're going to consider a function f, we're going to consider a variable t, which is time. In years. And we'll consider a function f( t), which is in dollars per year. Right, this is a financial example here. That's the unit here, dollars per year. And this is going to be a borrowing rate. Now, the reason why I want to put units in here is to show you that there's a good reason for this strange dx, which we append on these integrals. This notation. It allows us to change variables, it allows this to be consistent with units. And allows us to develop meaningful formulas, which are consistent across the board. And so I want to emphasize the units in this when I set up this modeling problem here.
Now, you're borrowing money. Let's say, every day. So that means delta t = 1/365. That's almost 1 / infinity, from the point of view of various purposes. So this is how much you're borrowing. In each time increment you're borrowing. And let's say that you borrow, your rate varies over the year. I mean, sometimes you need more money sometimes you need less. Certainly any business would be that way. And so here you are, you've got your money. And you're borrowing but the rate is varying. And so how much did you borrow? Well, in Day 45, which is 45/365, you borrowed the following amount. Here was your borrowing rate times this quantity. So, dollars per year. And so this is, if you like, I want to emphasize the scaling that comes about here. You have dollars per year. And this is this number of years. So that comes out to be in dollars. This final amount. This is the amount that you actually borrow. So you borrow this amount. And now, if I want to add up how much you get, you've borrowed in the entire year. That's this sum. i = 1 to 365 of f of, well, it's (i / 365) delta t. Which I'll just leave as delta t here. This is total amount borrowed.
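To make the borrowing sum concrete, here is a rough numerical sketch. The borrowing-rate function f below is made up purely for illustration; the lecture leaves it arbitrary:
import math

# Hypothetical borrowing rate f(t), in dollars per year, at time t (in years).
def f(t):
    return 1000.0 * (1.0 + math.sin(2.0 * math.pi * t))

dt = 1.0 / 365.0
total_borrowed = sum(f(i / 365.0) * dt for i in range(1, 366))
print(total_borrowed)
# For this particular f, the integral from 0 to 1 of f(t) dt is exactly 1000
# dollars, and the daily sum above comes out essentially the same.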
This is kind of a messy sum. In fact, your bank probably will keep track of it and they know how to do that. But when we're modeling things with strategies, you know, trading strategies of course, you're really some kind of financial engineer and you want to cleverly optimize how much you borrow. And how much you spend, and how much you invest. This is going to be very, very similar to the integral from 0 to 1 of f(t) dt. At the scale of 1/365, it's probably enough for many purposes. Now, however, there's another thing that you would want to model. Which is equally important. This is how much you borrowed, but there's also how much you owe the bank at the end of the year. And the amount that you owe the bank at the end of the year, I'm going to do it in a fancy way. It's, the interest, we'll say, is compounded continuously. So the interest rate, if you start out with P as your principal, then after time t, you owe, so borrow P, after time t, you owe P e^(rt), where r is your interest rate. Say, 0.05 per year.
That would be an example of an interest rate. And so, if you want to understand how much money you actually owe at the end of the year, at the end of the year what you owe is, well, you borrowed these amounts here. But now you owe more at the end of the year. You owe e raised to r times the amount of time left in the year. So the amount of time left in the year is 1 - (i / 365). Or 365 - i days left. So this is (1 - i / 365). And this is what you have to add up, to see how much you owe. And that is essentially the integral from 0 to 1. The delta t comes out. And you have here e^(r (1 - t)), so the t is replacing this i / 365, f(t) dt. And so when you start computing and thinking about what's the right strategy, you're faced with integrals of this type. So that's just an example. And see you next time. Remember to think about questions that you'll ask next time. | http://xoax.net/math/crs/single_variable_calculus_mit/lessons/Lecture18/ | 13 |
53 | This document introduces some basic XML principles and is intended for those who don't fully know XML. Most of these principles are required to use the macro language. For a more detailed understanding, visit the W3C XML site.
Every XML document should start with:
XML is used to represent a nested or tree structure. The basic rule of XML
is that a valid XML document must have ONLY one "root" element. A root
element is simply an element that starts the document. Just as everything
is supposed to have a beginning and end, a valid XML document has one too. An element
is an alpha-numeric set of characters surrounded by < and >.
<?xml version="1.0"?> <START>
<?xml version="1.0"?> <START> </START>
The above example can also be rewritten with a self-closing tag like this:
<?xml version="1.0"?> <START/>
Now let's look at a bit more of a complex XML document that has nested elements.
<testcase> <name>ExampleTestCase</name> <httpunit-session> <some-function-point/> </httpunit-session> </testcase>
The above example represents a hierarchical structure, as do all XML documents. The biggest difference between a simple hierarchical diagram and an XML document is that most hierarchical diagrams don't show where the parent-child relationship ends. It is assumed visually by indentation or some other means. On the other hand, a well formatted XML document will normally include indentation for easy reading, but indentation is not required; instead, XML marks where each parent-child relationship ends by requiring an end tag for every open tag.
In the above example, it can be said that there is a testcase and the testcase is made up of a name and an httpunit-session. The session, in turn, is made up of some-function-point. It is that simple!
The text between element tags must not have <, >, or & inside them. If it is required that these characters be in the text, then the text must look something like:
<?xml version="1.0"?> <testcase> <name><![CDATA[Some text with some < invalid > characaters in it.& ]]></name> <httpunit-session> <some-function-point/> </httpunit-session> </testcase>
An Element Attribute is a more restrictive way to describe something. In the above example, we used a <testcase> that has a name, and a session. While a session is somewhat complex and would be very difficult to turn into a set of characters, the <name> is simply a set of characters. In this case, it would be appropriate to put it into the <testcase> tag as an attribute:
<?xml version="1.0"?> <testcase name="ExampleTestCase"> <httpunit-session application="sourceforge"> <some-function-point functionId="A function point that does something" /> </httpunit-session> </testcase>
The drawback to using an element attribute is that an attribute can contain none of the invalid characters mentioned above. The CDATA marker cannot be used as an attribute's value. This obviously restricts the value of an attribute much more than the value of an element.
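For readers approaching this from the programming side, a generic parser makes the element/attribute distinction concrete. The sketch below uses Python's standard xml.etree.ElementTree purely as an illustration (it is my choice of tool, not something this document or Jameleon prescribes), and the <note> element is added only to show how CDATA text comes back:
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<testcase name="ExampleTestCase">
  <httpunit-session application="sourceforge">
    <some-function-point functionId="A function point that does something"/>
  </httpunit-session>
  <note><![CDATA[Some text with some < invalid > characters in it. &]]></note>
</testcase>"""

root = ET.fromstring(doc)
print(root.tag, root.get("name"))          # element name and its attribute value
session = root.find("httpunit-session")
print(session.get("application"))          # attribute of a child element
print(root.find("note").text)              # CDATA content is returned as plain text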
An XML document does not need to be well-formatted to be well-formed. The above example can also be written like:
<?xml version="1.0"?> <testcase name="ExampleTestCase"><httpunit-session application="sourceforge"> <some-function-point functionId="A function point that does something"/></httpunit-session> </testcase>
<?xml version="1.0"?> <testcase name="ExampleTestCase"> <httpunit-session application="sourceforge"> <some-function-point functionId="A function point that does something"/> </httpunit-session> </testcase>
XML also allows a document to be validated against a predefined structure. There are currently two types of definition documents, Document Type Definition (DTD) and XML Schema. Basically, both of these document definitions accomplish the same thing, but in different ways. Without a DTD, an XML document's structure is not restricted except by the syntax rule mentioned earlier.
The original reason for the creation of XML was to store data in an implementation-independent format. In other words, being able to store the data in one language like C++ and then being able to read it in through a different language like Java was a requirement of XML.
Normally, this would mean different implementations for parsing the document at the software level. Thus the idea of a DTD was introduced and a couple of years later XML schema was introduced.
This puts the validation in the hands of the XML parser instead of the software that reads or writes the XML. In other words, when a DTD restricts the structure of the XML, trying to write some unsupported element causes an error and the XML cannot be written. Now data can be written in a standard format, and a generic parser can be used to read and write this data without the need to implement an in-house parser.
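As a concrete illustration of parser-side validation, the sketch below uses the third-party Python library lxml (my own choice for the example; it is not part of Jameleon) to check a document against a tiny DTD:
from io import StringIO
from lxml import etree

# A tiny DTD: a testcase element must contain exactly one name element.
dtd = etree.DTD(StringIO("""
<!ELEMENT testcase (name)>
<!ELEMENT name (#PCDATA)>
"""))

good = etree.fromstring("<testcase><name>ExampleTestCase</name></testcase>")
bad = etree.fromstring("<testcase><unexpected/></testcase>")
print(dtd.validate(good))   # True  - the structure matches the DTD
print(dtd.validate(bad))    # False - the unsupported element is rejected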
Jameleon does not restrict the structure of the XML since the XML is used as a scripting language and the idea of creating function points on the fly would require the DTD to be specific for each customer. However, Jameleon does use this idea to know how to handle custom function points.
Since it is likely that two seemingly independent XML documents that describe different things could actually be used together to describe more complex data, the chance of the element names of each document type being the same is pretty high.
For example, let's say there is a trucking company that uses an XML document to describe its shipping orders. This document would more than likely contain an element that describes the contents, size, and weight of the cargo it is shipping. It would also, more than likely, describe the entity shipping and receiving the package. Now let's say there is a farmer that sells fruit. This farmer uses the before-mentioned trucking company to send his fruit across the country and he uses XML to describe his customers, products, and orders.
Now let's say the two companies decide to use the same mechanism to share information. It is more than likely that the two different XML data-types above would use some of the same structure to describe their customers, packages, and products. And if they didn't, the data would mean very similar things and would likely become difficult to understand if the two were put together. The farmer would want to keep his own data description and the trucking company would also want to keep their data description since they are working with several companies that ship various products. Wouldn't it be nice if their two data definitions could be included in the same XML document yet remain independent?
XML provides a solution to this dilemma through namespaces. A namespace is basically the idea of having all of the elements defined by a document's definition start with a different sequence of characters than they normally would. However, the XML parser is smart enough to give the software the same data structure it is expecting. For example, if we had a simple DTD that defined one single element as <element/> and set a namespace for that DTD to be "jmln", then that element would now be expressed in the document as <jmln:element/>, but by the time it got to the application that reads the XML in, it would only get <element/>.
Below is an example of how Jameleon uses namespaces:
<j:jelly xmlns:qa="jelly:jameleon" xmlns:j="jelly:core">
As might be guessed, the "xmlns" stands for XML Namespace. This above element imports two different xml document definitions. One for "jelly:core" (defined by xmlns:j="jelly:core") and one for "jelly:jameleon" (defined by "xmlns:qa="jelly:jameleon"). The import of "jelly:core" states that all elements described in "jelly:core" must start with "j". The import of "jelly:jameleon" states that all elements described in jelly:jameleon must start with "qa".
Notice the : after xmlns. This is the part that states what all elements of the type used after the = must begin with. Normally Jameleon would use the XML element, <testcase>. However, the namespace definition of jelly:jameleon, requires all Jameleon-based elements to begin with "qa". So <testcase> turns into <qa:testcase>.
By excluding the : after xmlns, a default namespace can be defined. This means that the tags for that namespace don't need to begin with anything. For example: <testcase xmlns="jelly:jameleon"> works just fine. To make the Jameleon tags the default and the Jelly tags begin with j:, do this: <testcase xmlns="jelly:jameleon" xmlns:j="jelly:core">
The value of the xmlns:qa is the location of the definition of the document. In this case, "jelly:jameleon" is the location of the structure definition. A namespace may be defined in any element. However, the namespace will only be valid for that element and its children.
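A namespace-aware parser resolves each prefix to the namespace value it maps to, so the prefix itself is only local shorthand. A small sketch with Python's standard xml.etree.ElementTree (my own illustration, not something the Jameleon documentation uses):
import xml.etree.ElementTree as ET

doc = """<j:jelly xmlns:qa="jelly:jameleon" xmlns:j="jelly:core">
  <qa:testcase name="ExampleTestCase"/>
</j:jelly>"""

root = ET.fromstring(doc)
print(root.tag)        # {jelly:core}jelly - the prefix is replaced by the namespace
for child in root:
    print(child.tag)   # {jelly:jameleon}testcase

# Lookups use the namespace value, not the prefix, so any prefix parses the same.
print(root.find("{jelly:jameleon}testcase").get("name"))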
An XML fragment is considered an XML document that doesn't have a root element or even if it does, that root may become a child of another element. The idea of including existing XML into another document is very flexible and can be used as a workaround for having a set of the same data in many XML documents.
Including an XML fragment in another XML document is actually quite simple. First, the document that is going to be imported must be defined and given a name. The definition of this document must be included after the prolog and before the root element of the XML document. Second, the name is used to include that XML fragment in the document wherever desired (as long as it stays valid XML after the import).
<httpunit-session application="sourceforge"> <some-function-point functionId="A function point that does something" /> </httpunit-session>
is saved to the file some.xml.fragment.
<!DOCTYPE project [ <!ENTITY fragToInclude SYSTEM "some.xml.fragment"> ]> <testcase xmlns="jelly:jameleon" name="ExampleTestCase"> &fragToInclude; </testcase>
In this file, "some.xml.fragment" is given the name "fragToInclude" at the top of the file. The & followed by the name (fragToInclude) followed by a ;, or simply &fragToInclude;, is used as the actual "import here" statement.
Once the XML is parsed, the document is brought together to look like:
<testcase xmlns="jelly:jameleon" name="ExampleTestCase"> <httpunit-session application="sourceforge"> <some-function-point functionId="A function point that does something"/> </httpunit-session> </testcase>
Jameleon does not require the use of fragments, but it is recommended for testcases that test the same functionality but in different ways. Getting to the state where the functionality can be tested is most of the work, and the function points that get to that state can be grouped into a fragment for inclusion in the actual test cases.
62 | When men are arrived at the goal, they should not turn back. - Plutarch
Table of Contents
If an explorer were to step onto the surface of Mercury, he would discover a world resembling lunar terrain. Mercury's rolling, dust-covered hills have been eroded from the constant bombardment of meteorites. Fault-cliffs rise for several kilometers in height and extend for hundreds of kilometers. Craters dot the surface. The explorer would notice that the Sun appears two and a half times larger than on Earth; however, the sky is always black because Mercury has virtually no atmosphere to cause scattering of light. As the explorer gazes out into space, he might see two bright stars: one appearing as cream-colored Venus and the other as blue-colored Earth.
Until Mariner 10, little was known about Mercury because of the difficulty in observing it from Earth telescopes. At maximum elongation it is only 28 degrees from the Sun as seen from Earth. Because of this, it can only be viewed during daylight hours or just prior to sunrise or after sunset. When observed at dawn or dusk, Mercury is so low on the horizon that the light must pass through 10 times the amount of Earth's atmosphere than it would if Mercury was directly overhead.
During the 1880's, Giovanni Schiaparelli drew a sketch showing faint features on Mercury. He determined that Mercury must be tidally locked to the Sun, just as the Moon is tidally locked to Earth. In 1962, radio astronomers looked at radio emissions from Mercury and determined that the dark side was too warm to be tidally locked. It was expected to be much colder if it always faced away from the Sun. In 1965, Pettengill and Dyce determined Mercury's period of rotation to be 59 +- 5 days based upon radar observations. Later in 1971, Goldstein refined the rotation period to be 58.65 +- 0.25 days using radar observations. After close observation by the Mariner 10 spacecraft, the period was determined to be 58.646 +- 0.005 days.
Although Mercury is not tidally locked to the Sun, its rotational period is tidally coupled to its orbital period. Mercury rotates one and a half times during each orbit. Because of this 3:2 resonance, a day on Mercury (sunrise to sunrise) is 176 Earth days long.
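The 176-day figure follows directly from the two periods quoted above. A quick check (my own arithmetic, not part of the original text):
# Length of Mercury's solar day (sunrise to sunrise) from its rotation and
# orbital periods, both measured in Earth days.
rotation = 58.646
orbit = 87.969

# For prograde rotation the solar day satisfies 1/solar_day = 1/rotation - 1/orbit.
solar_day = 1.0 / (1.0 / rotation - 1.0 / orbit)
print(solar_day)   # roughly 176 Earth days, i.e. two full orbits of the Sun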
During Mercury's distant past, its period of rotation may have been faster. Scientists speculate that its rotation could have been as rapid as 8 hours, but over millions of years it was slowly despun by solar tides. A model of this process shows that such a despinning would take 10^9 years and would have raised the interior temperature by 100 degrees Kelvin.
Most of the scientific findings about Mercury come from the Mariner 10 spacecraft which was launched on November 3, 1973. It flew past the planet on March 29, 1974 at a distance of 705 kilometers from the surface. On September 21, 1974 it flew past Mercury for the second time and on March 16, 1975 for the third time. During these visits, over 2,700 pictures were taken, covering 45% of Mercury's surface. Up until this time, scientists did not suspect that Mercury would have a magnetic field. They thought that because Mercury is small, its core would have solidified long ago. The presence of a magnetic field indicates that a planet has an iron core that is at least partially molten. Magnetic fields are generated from the rotation of a conductive molten core, a process known as the dynamo effect.
Mariner 10 showed that Mercury has a magnetic field that is 1% as strong as Earth's. This magnetic field is inclined 7 degrees to Mercury's axis of rotation and produces a magnetosphere around the planet. The source of the magnetic field is unknown. It might be produced from a partially molten iron core in the planet's interior. Another source of the field might be from remnant magnetization of iron-bearing rocks which were magnetized when the planet had a strong magnetic field during its younger years. As the planet cooled and solidified, the remnant magnetization was retained.
Even before Mariner 10, Mercury was known to have a high density. Its density is 5.44 g/cm3, which is comparable to Earth's 5.52 g/cm3 density. In an uncompressed state, Mercury's density is 5.5 g/cm3, where Earth's is only 4.0 g/cm3. This high density indicates that the planet is 60 to 70 percent by weight metal, and 30 percent by weight silicate. This gives a core radius of 75% of the planet's radius and a core volume of 42% of the planet's volume.
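The two core figures quoted above are consistent with each other, since volume scales as the cube of the radius. A one-line check (my own arithmetic):
# A core with 75% of the planet's radius fills 0.75^3 of its volume.
core_radius_fraction = 0.75
print(core_radius_fraction ** 3)   # about 0.42, matching the 42% volume figure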
Surface of Mercury
The pictures returned from the Mariner 10 spacecraft showed a world that resembles the moon. It is pocked with craters and contains huge multi-ring basins and many lava flows. The craters range in size from 100 meters (the smallest resolvable feature on Mariner 10 images) to 1,300 kilometers. They are in various stages of preservation. Some are young with sharp rims and bright rays extending from them. Others are highly degraded, with rims that have been smoothed from the bombardment of meteorites. The largest crater on Mercury is the Caloris basin. A basin was defined by Hartmann and Kuiper (1962) as a "large circular depression with distinctive concentric rings and radial lineaments." Others consider any crater larger than 200 kilometers a basin. The Caloris basin is 1,300 kilometers in diameter, and was probably caused by a projectile larger than 100 kilometers in size. The impact produced concentric mountain rings three kilometers high and sent ejecta 600 to 800 kilometers across the planet. (Another good example of a basin showing concentric rings is the Valhalla region on Jupiter's moon Callisto.) The seismic waves produced from the Caloris impact focused onto the other side of the planet and produced a region of chaotic terrain. After the impact the crater was partially filled with lava flows.
Mercury is marked with great curved cliffs or lobate scarps that were apparently formed as Mercury cooled and shrank a few kilometers in size. This shrinking produced a wrinkled crust with scarps kilometers high and hundreds of kilometers long.
The majority of Mercury's surface is covered by plains. Much of it is old and heavily cratered, but some of the plains are less heavily cratered. Scientists have classified these plains as intercrater plains and smooth plains. Intercrater plains are less saturated with craters and the craters are less than 15 kilometers in diameter. These plains were probably formed as lava flows buried the older terrain. The smooth plains are younger still with fewer craters. Smooth plains can be found around the Caloris basin. In some areas patches of smooth lava can be seen filling craters.
Mercury's history of formation is similar to Earth's. About 4.5 billion years ago the planets formed. This was a time of intense bombardment for the planets as they scooped up matter and debris left around from the nebula that formed them. Early during this formation, Mercury probably differentiated into a dense metallic core and a silicate crust. After the intense bombardment period, lava flowed across the surface and covered the older crust. By this time much of the debris had been swept up and Mercury entered a lighter bombardment period. During this period the intercrater plains formed. Then Mercury cooled. Its core contracted, which in turn broke the crust and produced the prominent lobate scarps. During the third stage, lava flooded the lowlands and produced the smooth plains. During the fourth stage micrometeorite bombardment created a dusty surface also known as regolith. A few larger meteorites impacted the surface and left bright rayed craters. Other than the occasional collision of a meteorite, Mercury's surface is no longer active and remains the same as it has for millions of years.
Could water exist on Mercury?
It would appear that Mercury could not support water in any form. It has very little atmosphere and is blazing hot during the day, but in 1991 scientists at Caltech bounced radio waves off Mercury and found an unusual bright return from the north pole. The apparent brightening at the north pole could be explained by ice on or just under the surface. But is it possible for Mercury to have ice? Because Mercury's rotation axis is almost perpendicular to its orbital plane, the north pole always sees the Sun just above the horizon. The insides of craters would never be exposed to the Sun and scientists suspect that they would remain colder than -161 C. These freezing temperatures could trap water outgassed from the planet, or ices brought to the planet from cometary impacts. These ice deposits might be covered with a layer of dust and would still show bright radar returns.
|Mass (Earth = 1)||5.5271e-02|
|Equatorial radius (km)||2,439.7|
|Equatorial radius (Earth = 1)||3.8252e-01|
|Mean density (gm/cm^3)||5.42|
|Mean distance from the Sun (km)||57,910,000|
|Mean distance from the Sun (Earth = 1)||0.3871|
|Rotational period (days)||58.6462|
|Orbital period (days)||87.969|
|Mean orbital velocity (km/sec)||47.88|
|Tilt of axis (degrees)||0.00|
|Orbital inclination (degrees)||7.004|
|Equatorial surface gravity (m/sec^2)||2.78|
|Equatorial escape velocity (km/sec)||4.25|
|Visual geometric albedo||0.10|
|Mean surface temperature||179°C|
|Maximum surface temperature||427°C|
|Minimum surface temperature||-173°C|
- MESSENGER Flies through Mercury's Magnetosphere.
- The Early Formation of Mercury.
- The Final Stages in Mercury's Formation.
Mercury Shows Its True Colors
MESSENGER's Wide Angle Camera (WAC), part of the Mercury Dual Imaging System (MDIS), is equipped with 11 narrow-band color filters. As the spacecraft receded from Mercury after making its closest approach on January 14, 2008, the WAC recorded a 3x3 mosaic covering part of the planet not previously seen by spacecraft. The color image shown here was generated by combining the mosaics taken through the WAC filters that transmit light at wavelengths of 1000 nanometers (infrared), 700 nanometers (far red), and 430 nanometers (violet). These three images were placed in the red, green, and blue channels, respectively, to create the visualization presented here. The human eye is sensitive only across the wavelength range from about 400 to 700 nanometers. Creating a false-color image in this way accentuates color differences on Mercury's surface that cannot be seen in black-and-white (single-color) images.
Color differences on Mercury are subtle, but they reveal important information about the nature of the planet's surface material. A number of bright spots with a bluish tinge are visible in this image. These are relatively recent impact craters. Some of the bright craters have bright streaks (called "rays" by planetary scientists) emanating from them. Bright features such as these are caused by the presence of freshly crushed rock material that was excavated and deposited during the highly energetic collision of a meteoroid with Mercury to form an impact crater. The large circular light-colored area in the upper right of the image is the interior of the Caloris basin. Mariner 10 viewed only the eastern (right) portion of this enormous impact basin, under lighting conditions that emphasized shadows and elevation differences rather than brightness and color differences. MESSENGER has revealed that Caloris is filled with smooth plains that are brighter than the surrounding terrain, hinting at a compositional contrast between these geologic units. The interior of Caloris also harbors several unusual dark-rimmed craters, which are visible in this image. The MESSENGER science team is working with the 11-color images in order to gain a better understanding of what minerals are present in these rocks of Mercury's crust.
The Interior of Mercury
Most of what is known about the internal structure of Mercury comes from data acquired by the Mariner 10 spacecraft that flew past the planet in 1974 and 1975. Mercury is about a third of the size of Earth, yet its density is comparable to that of Earth. This indicates that Mercury has a large core roughly the size of Earth's Moon, or about 75% of the planet's radius. The core is likely composed of 60 to 70% iron by mass. Mariner 10's measurements of the planet reveal a dipolar magnetic field possibly produced by a partially molten core. A solid rocky mantle surrounds the core with a thin crust of about 100 kilometers. (Copyright Calvin J. Hamilton)
Caloris Basin—in Color!
This false-color image of Mercury, recently published in Science magazine, shows the great Caloris impact basin, visible in this image as a large, circular, orange feature in the center of the picture. The contrast between the colors of the Caloris basin floor and those of the surrounding plains indicate that the composition of Mercury's surface is variable. Many additional geological features with intriguing color signatures can be identified in this image. For example, the bright orange spots just inside the rim of Caloris basin are thought to mark the location of volcanic features, such as the volcano shown in image PIA10942. MESSENGER Science Team members are studying these regional color variations in detail, to determine the different mineral compositions of Mercury's surface and to understand the geologic processes that have acted on it. Images taken through the 11 different WAC color filters were used to create this false-color image. The 11 different color images were compared and contrasted using statistical methods to isolate and enhance subtle color differences on Mercury's surface. (Courtesy NASA/Johns Hopkins University Applied Physics Laboratory/Arizona State)
MESSENGER Discovers Volcanoes on Mercury
As reported in the July 4, 2008 issue of Science magazine, volcanoes have been discovered on Mercury's surface from images acquired during MESSENGER's first Mercury flyby. This image shows the largest feature identified as a volcano in the upper center of the scene. The volcano has a central kidney-shaped depression, which is the vent, and a broad smooth dome surrounding the vent. The volcano is located just inside the rim of the Caloris impact basin. The rim of the basin is marked with hills and mountains, as visible in this image. The role of volcanism in Mercury's history had been previously debated, but MESSENGER's discovery of the first identified volcanoes on Mercury's surface shows that volcanism was active in the distant past on the innermost planet. (Courtesy NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington)
Mercury - in Color!
One week ago, the MESSENGER spacecraft transmitted to Earth the first high-resolution image of Mercury by a spacecraft in over 30 years, since the three Mercury flybys of Mariner 10 in 1974 and 1975. MESSENGER's Wide Angle Camera (WAC), part of the Mercury Dual Imaging System (MDIS), is equipped with 11 narrow-band color filters, in contrast to the two visible-light filters and one ultraviolet filter that were on Mariner 10's vidicon camera. By combining images taken through different filters in the visible and infrared, the MESSENGER data allow Mercury to be seen in a variety of high-resolution color views not previously possible. MESSENGER's eyes can see far beyond the color range of the human eye, and the colors seen in the accompanying image are somewhat different from what a human would see.
The color image was generated by combining three separate images taken through WAC filters sensitive to light in different wavelengths; filters that transmit light with wavelengths of 1000, 700, and 430 nanometers (infrared, far red, and violet, respectively) were placed in the red, green, and blue channels, respectively, to create this image. The human eye is sensitive across only the wavelength range 400 to 700 nanometers. Creating a false-color image in this way accentuates color differences on Mercury's surface that cannot be seen in the single-filter, black-and-white image released last week.
This visible-infrared image shows an incoming view of Mercury, about 80 minutes before MESSENGER's closest pass of the planet on January 14, 2008, from a distance of about 27,000 kilometers (17,000 miles).
Mercury's Complex Cratering History
On January 14, 2008, the MESSENGER spacecraft observed about half of the hemisphere not seen by Mariner 10. These images, mosaicked together by the MESSENGER team, were taken by the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS) instrument, about 20 minutes after MESSENGER's closest approach to Mercury (2:04 pm EST), when the spacecraft was at a distance of about 5,000 kilometers (about 3,100 miles). The image shows features as small as 400 meters (0.25 miles) in size and is about 370 kilometers (230 miles) across.
The image shows part of a large, fresh crater with secondary crater chains located near Mercury's equator on the side of the planet newly imaged by MESSENGER. Large, flat-floored craters often have terraced rims from post-impact collapse of their newly formed walls. The hundreds of secondary impactors that are excavated from the planet's surface by the incoming object create long, linear crater chains radial to the main crater. These chains, in addition to the rest of the ejecta blanket, create the complicated, hilly terrain surrounding the primary crater. By counting craters on the ejecta blanket that have formed since the impact event, the age of the crater can be estimated. This count can then be compared with a similar count for the crater floor to determine whether any material has partially filled the crater since its formation. With their large size and production of abundant secondary craters, these flat-floored craters both illuminate and confound the study of the geological history of Mercury.
Looking Toward the South Pole of Mercury
On January 14, 2008, the MESSENGER spacecraft passed 200 kilometers (124 miles) above the surface of Mercury and snapped the first pictures of a side of Mercury not previously seen by spacecraft. This image shows that previously unseen side, with a view looking toward Mercury's south pole. The southern limb of the planet can be seen in the bottom right of the image. The bottom left of the image shows the transition from the sunlit, day side of Mercury to the dark, night side of the planet, a transition line known as the terminator. In the region near the terminator, the sun shines on the surface at a low angle, causing the rims of craters and other elevated surface features to cast long shadows, accentuating height differences in the image.
This image was acquired about 98 minutes after MESSENGER's closest approach
to Mercury, when the spacecraft was at a distance of about 33,000 kilometers.
MESSENGER Looks to the North
As MESSENGER sped by Mercury on January 14, 2008, the Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) captured this shot looking toward Mercury's north pole. The surface shown in this image is from the side of Mercury not previously seen by spacecraft. The top right of this image shows the limb of the planet, which transitions into the terminator (the line between the sunlit, day side and the dark, night side) on the top left of the image. Near the terminator, the Sun illuminates surface features at a low angle, casting long shadows and causing height differences of the surface to appear more prominent in this region.
It is interesting to compare MESSENGER's view to the north with the image looking toward the south pole, released on January 21. Comparing these two images, it can be seen that the terrain near the south pole is more heavily cratered while some of the region near the north pole shows less cratered, smooth plains material, consistent with the general observations of the poles made by Mariner 10. MESSENGER acquired over 1200 images of Mercury's surface during its flyby, and the MESSENGER team is busy examining all of those images in detail, to understand the geologic history of the planet as a whole, from pole to pole.
Mariner 10 Outgoing Color Image of Mercury
This mosaic of Mercury was created from more than 140 images taken by the Mariner 10 spacecraft as it flew past the innermost planet on March 29, 1974. Mariner 10's trajectory brought the spacecraft across the dark hemisphere of Mercury. The images were acquired after the spacecraft exited Mercury's shadow. The color data is from more distant global views. (Copyright Ted Stryk)
MESSENGER Views Mercury's Horizon
As the MESSENGER spacecraft drew closer to Mercury for its historic first flyby, the spacecraft's Narrow Angle Camera (NAC) on the Mercury Dual Imaging System (MDIS) acquired an image mosaic of the sunlit portion of the planet. This image is one of those mosaic frames and was acquired on January 14, 2008, 18:10 UTC, when the spacecraft was about 18,000 kilometers (11,000 miles) from the surface of Mercury, about 55 minutes before MESSENGER's closest approach to the planet.
The image shows a variety of surface textures, including smooth plains at the center of the image, many impact craters (some with central peaks), and rough material that appears to have been ejected from the large crater to the lower right. This large 200-kilometer-wide (about 120 miles) crater was seen in less detail by Mariner 10 more than three decades ago and was named Sholem Aleichem for the Yiddish writer. In this MESSENGER image, it can be seen that the plains deposits filling the crater's interior have been deformed by linear ridges. The shadowed area on the right of the image is the day-night boundary, known as the terminator. Altogether, MESSENGER acquired over 1200 images of Mercury, which the science team members are now examining in detail to learn about the history and evolution of the innermost planet.
"The Spider" - Radial Troughs within Caloris
The Narrow Angle Camera of the Mercury Dual Imaging System (MDIS) on the MESSENGER spacecraft obtained high-resolution images of the floor of the Caloris basin on January 14, 2008. Near the center of the basin, an area unseen by Mariner 10, this remarkable feature - nicknamed "the spider" by the science team - was revealed. A set of troughs radiates outward in a geometry unlike anything seen by Mariner 10. The radial troughs are interpreted to be the result of extension (breaking apart) of the floor materials that filled the Caloris basin after its formation. Other troughs near the center form a polygonal pattern. This type of polygonal pattern of troughs is also seen along the interior margin of the Caloris basin. An impact crater about 40 km (~25 miles) in diameter appears to be centered on "the spider." The straight-line segments of the crater walls may have been influenced by preexisting extensional troughs, but some of the troughs may have formed at the time that the crater was excavated. (Courtesy NASA/JHUAPL)
MESSENGER Reveals Mercury's Geological History
Shortly following MESSENGER's closest approach to Mercury on January 14, 2008, the spacecraft's Narrow Angle Camera (NAC) on the Mercury Dual Imaging System (MDIS) instrument acquired this image as part of a mosaic that covers much of the sunlit portion of the hemisphere not viewed by Mariner 10. Images such as this one can be read in terms of a sequence of geological events and provide insight into the relative timing of processes that have acted on Mercury's surface in the past.
The double-ringed crater pictured in the lower left of this image appears to be filled with smooth plains material, perhaps volcanic in nature. This crater was subsequently disrupted by the formation of a prominent scarp (cliff), the surface expression of a major crustal fault system, that runs alongside part of its northern rim and may have led to the uplift seen across a portion of the crater's floor. A smaller crater in the lower right of the image has also been cut by the scarp, showing that the fault beneath the scarp was active after both of these craters had formed. The MESSENGER team is working to combine inferences about the timing of events gained from this image with similar information from the hundreds of other images acquired by MESSENGER to extend and refine the geological history of Mercury previously defined on the basis only of Mariner 10 images.
Ridges and Cliffs on Mercury's Surface
A complex history of geological evolution is recorded in this frame from the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS) instrument, taken during MESSENGER's close flyby of Mercury on January 14, 2008. Part of an old, large crater occupies most of the lower left portion of the frame. An arrangement of ridges and cliffs in the shape of a "Y" crosses the crater's floor. The shadows defining the ridges are cast on the floor of the crater by the Sun shining from the right, indicating a descending stair-step of plains. The main, right-hand branch of the "Y" crosses the crater floor, the crater rim, and continues off the top edge of the picture; it appears to be a classic "lobate scarp" (irregularly shaped cliff) common in all areas of Mercury imaged so far. These lobate scarps were formed during a period when Mercury's crust was contracting as the planet cooled. In contrast, the branch of the Y to the left ends at the crater rim and is restricted to the floor of the crater. Both it and the lighter-colored ridge that extends downward from it resemble "wrinkle ridges" that are common on the large volcanic plains, or "maria," on the Moon. The MESSENGER science team is studying what features like these reveal about the interior cooling history of Mercury.
Ghostly remnants of a few craters are seen on the right side of this image, possibly indicating that once-pristine, bowl-shaped craters (like those on the large crater's floor) have been subsequently flooded by volcanism or some other plains-forming process.
Detailed Close-up of Mercury's Previously Unseen Surface
This scene was imaged by MESSENGER's Narrow Angle Camera (NAC) on the Mercury Dual Imaging System (MDIS) during the spacecraft's flyby of Mercury on January 14, 2008. The scene is part of a mosaic that covers a portion of the hemisphere not viewed by Mariner 10 during any of its three flybys (1974-1975). The surface of Mercury is revealed at a resolution of about 250 meters/pixel (about 820 feet/pixel). For this image, the Sun is illuminating the scene from the top and north is to the left.
The outer diameter of the large double ring crater at the center of the scene is about 260 km (about 160 miles). The crater appears to be filled with smooth plains material that may be volcanic in nature. Multiple chains of smaller secondary craters are also seen extending radially outward from the double ring crater. Double or multiple rings form in craters with very large diameters, often referred to as impact basins. On Mercury, double ring basins begin to form when the crater diameter exceeds about 200 km (about 125 miles); at such an onset diameter the inner rings are typically low, partial, or discontinuous. The transition diameter at which craters begin to form rings is not the same on all bodies and, although it depends primarily on the surface gravity of the planet or moon, the transition diameter can also reveal important information about the physical characteristics of surface materials. Studying impact craters, such as this one, in the more than 1200 images returned from this flyby will provide clues to the physical properties of Mercury's surface and its geological history.
Hills of Mercury
"Weird terrain" best describes this hilly, lineated region of Mercury. This area is at the antipodal point from the large Caloris basin. The shock wave produced by the Caloris impact was reflected and focused to this antipodal point, thus jumbling the crust and breaking it into a series of complex blocks. The area covered is about 100 kilometers (62 miles) on a side. (Copyright Calvin J. Hamilton; FDS 27370)
Caloris Basin Floor
This image is a high resolution view of the Caloris Basin shown in the previous image. It shows ridges and fractures that increase in size towards the center of the basin (upper left). (Copyright Calvin J. Hamilton; FDS 126)
Bright Rayed Craters
This image shows two prominent craters (upper right) with bright halos on Mercury. The craters are about 40 kilometers (25 miles) in diameter. The halos and rays cover other features on the surface indicating that they are some of the youngest on Mercury. (Copyright Calvin J. Hamilton; FDS 275)
Large Faults on Mercury
This Mariner 10 image shows Santa Maria Rupes, the sinuous dark feature running through the crater at the center of this image. Many such features were discovered in the Mariner images of Mercury and are interpreted to be enormous thrust faults, where part of the mercurian crust was pushed slightly over an adjacent part by compressional forces. The abundance and length of the thrust faults indicate that the radius of Mercury decreased by 1-2 kilometers (0.6-1.2 miles) after the solidification and impact cratering of the surface. This volume change probably was due to the cooling of the planet, following the formation of a metallic core three-fourths the size of the planet. North is towards the top, and the image is 200 kilometers (120 miles) across. (© Copyright 1998 by Calvin J. Hamilton; FDS 27448)
This is an image of a 450 kilometer (280 mile) ridge called Antoniadi. It travels along the right edge of the image, and transects a large 80 kilometer (50 mile) crater about half way in between. It crosses smooth plains to the north and intercrater plains to the south [Strom et al., 1975]. (Copyright Calvin J. Hamilton)
Double Ring Basin
This image shows a double-ring basin which is 200 kilometers (120 miles) in diameter. The floor contains smooth plains material. The inner ring basin is at a lower elevation than the outer ring. (Copyright Calvin J. Hamilton; FDS 27301)
Incoming View of Mercury
This photomosaic of Mercury was constructed from photos taken by Mariner 10 six hours before the spacecraft flew past the planet on March 29, 1974. These images were taken from a distance of 5,380,000 kilometers (3,340,000 miles). (Courtesy USGS, and NASA)
This two image (FDS 26850, 26856) mosaic of Mercury was constructed from photos taken by Mariner 10 a few hours before the spacecraft's closest and first encounter with the planet on March 29, 1974. (Copyright Calvin J. Hamilton)
This mosaic shows the Caloris Basin (located half-way in shadow on the morning terminator). Caloris is Latin for heat, and the basin is so named because it is near the subsolar point (the point closest to the Sun) when Mercury is at perihelion. The Caloris basin is 1,300 kilometers (800 miles) in diameter and is the largest known structure on Mercury. It was formed by the impact of a projectile of asteroid dimensions. The interior floor of the basin contains smooth plains but is highly ridged and fractured. North is towards the top of this image. (Copyright Calvin J. Hamilton; FDS 188-199)
Davies, M. E., S. E. Dwornik, D. E. Gault, and R. G. Strom. Atlas of Mercury. NASA SP-423. Washington, D.C.: U.S. Government Printing Office, 1978.
Mariner 10 Preliminary Science Report. Science, 185:141-180, 1974.
Mariner 10 Imaging Science Final Report. Journal of Geophysical Research, 80(17):2341-2514, 1975.
Strom, Robert G. et al. "Tectonism and Volcanism on Mercury." Journal of Geophysical Research, 80(17):2478-2507, 1975.
Trask, Newell J. and John E. Guest. "Preliminary Geologic Terrain Map of Mercury." Journal of Geophysical Research, 80(17):2461-2477, 1975. | http://www.solarviews.com/eng/mercury.htm | 13 |
181 | Lecture Examples - Physics 111.6 (02)
- A car is moving at constant speed v around a flat, circular track of radius R. It takes a time T for the car to complete one lap. The acceleration, a, of the car is given by a = v2/R = 4π2R/T2.
The (constant) speed of the car is v = 2πR/T.
Calculate the speed v and the radius of the circular track, R, if a = 7.85 m/s2 and T = 15.9 s.
ANS: 19.9 m/s, 50.4 m
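These circular-motion answers are easy to verify numerically. The short sketch below is not part of the original handout; it simply assumes the uniform-circular-motion relations v = 2πR/T and a = v2/R, with variable names of our own choosing.

```python
import math

# Given quantities from the problem statement
a = 7.85   # centripetal acceleration, m/s^2
T = 15.9   # time for one lap, s

# a = v^2/R and v = 2*pi*R/T combine to a = 2*pi*v/T
v = a * T / (2 * math.pi)      # speed of the car
R = v * T / (2 * math.pi)      # track radius, from v = 2*pi*R/T

print(f"v = {v:.1f} m/s")      # ~19.9 m/s
print(f"R = {R:.1f} m")        # ~50.3 m (50.4 m if v is first rounded to 19.9)
```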
- On a clear October day two students take a three-hour automobile trip to enjoy the fall foliage. In the first two hours they travel 100 km at a constant speed. In the third hour they travel another 80 km at a different constant speed. What is the average speed for each segment and for the entire trip?
ANS: 50 km/h, 80 km/h, 60 km/h
- (Example 2.8) The spacecraft shown in Figure 2.14a is travelling with a velocity of +3250 m/s. Suddenly the retrorocket is fired, and the spacecraft begins to slow down with an acceleration whose magnitude is 10.0 m/s2. What is the velocity of the spacecraft when the displacement of the craft is +215 km, relative to the point where the retrorocket began firing?
ANS: ±2500 m/s
- A motorcycle that is stopped at a traffic light accelerates at 4.20 m/s2 as soon as the light turns green. Just then, a car going 54.0 km/h passes the motorcycle. The car continues at that speed. How long after the light has changed will the motorcycle overtake the car and what is the speed of the motorcycle then, assuming it accelerated at 4.20 m/s2 throughout that time?
ANS: 7.14 s, 30.0 m/s
- A stone is thrown vertically from the roof of a building. It passes a window 14.0 m below the roof with a speed of 22.0 m/s and hits the ground 2.80 s after it was thrown. Determine the initial velocity of the stone and the height of the building.
ANS: 14.5 m/s DOWN, 79.1 m
- A person stands 12.0 m from a building whose roof is 60.0 m above the
person's eyes. (a) Calculate the distance from the person's eyes to the
roof. (b) Calculate the angle of the person's line of sight to the roof.
ANS: (a) 61.2 m, (b) 78.7°
- An airplane flies horizontally on a southwesterly heading a distance of 250 km. It
then flies 400 km due North. Calculate the distance of the plane from
its point of departure and the direction of its final destination from the
point of departure.
ANS: 285 km @ 51.6° North of West
- (Example 3.3) Figure 3.9 shows an airplane moving horizontally with a constant velocity of +115 m/s at an altitude of 1050 m. The directions to the right and upward have been chosen as the positive directions. The plane releases a 'care package' that falls to the ground along a curved trajectory. Ignoring air resistance, determine the time required for the package to hit the ground. (Example 3.4) Find the speed of the package and the direction of the velocity vector just before the package hits the ground. How far does the package travel horizontally during its fall?
ANS: 14.6 s, 184 m/s at 51.3° below horizontal, 1680 m
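A short script can reproduce the care-package numbers. This is only a check, assuming level flight, no air resistance, and g = 9.80 m/s2; the variable names are ours, not from the textbook example.

```python
import math

g  = 9.80     # m/s^2
vx = 115.0    # horizontal velocity at release, m/s
h  = 1050.0   # altitude at release, m

t     = math.sqrt(2 * h / g)                 # fall time from h = (1/2) g t^2
vy    = g * t                                # vertical speed at impact
speed = math.hypot(vx, vy)                   # impact speed
angle = math.degrees(math.atan2(vy, vx))     # angle below horizontal
x     = vx * t                               # horizontal distance travelled

print(f"t = {t:.1f} s, speed = {speed:.0f} m/s, "
      f"angle = {angle:.1f} deg below horizontal, range = {x:.0f} m")
# roughly 14.6 s, 184 m/s, 51.3 deg, 1680 m
```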
- A boy wants to throw a ball over a fence that is 15.0 m high and 6.00 m away. At the instant when the ball leaves the boy's hand, it is 1.00 m aboveground. What must be the initial velocity of the ball so that it will
be moving horizontally when it clears the fence? (i.e., we want the minimum values for both the launch angle and the launch speed.)
ANS: 17.0 m/s at 78° above horizontal
- How far will a stone travel over level ground if it is thrown upward at an angle of 30.0° with respect to the horizontal and with a speed of 12.0 m/s?
ANS: 12.7 m
- (Example 4.1) Two people are pushing a stalled car, as Figure 4.5a indicates. The mass of the car is 1850 kg. One person applies a force of 275 N to the car, while the other applies a force of 395 N. Both forces act in the same direction. A third force of 560 N also acts on the car, but in a direction opposite to that in which the people are pushing. This force arises because of friction and the extent to which the pavement opposes the motion of the tires. Find the car's acceleration.
ANS: 0.059 m/s2
- A furniture van has a smooth ramp for making deliveries. The ramp makes an angle θ with the horizontal. A large crate of mass m is placed at the top of the ramp. Assuming the ramp is a frictionless plane, what is the acceleration of the crate as it moves down the ramp?
ANS: -g sin θ (i.e., magnitude g sin θ, directed down the ramp)
- A 68.0 kg passenger rides in an elevator that is accelerating upward at 1.00 m/s2 because of external forces. What is the force exerted by the passenger on the floor of the elevator?
ANS: 740 N DOWN
- (Example 4.10) A sled is travelling at 4.00 m/s along a horizontal stretch of snow, as Figure 4.24a illustrates. The coefficient of kinetic friction is µk = 0.0500. How far does the sled go before stopping? (Example 4.9) The coefficient of static friction is µs = 0.350. The sled and its rider have a total mass of 38.0 kg. Determine the horizontal force needed to get the sled barely moving again after it has stopped.
ANS: 16.3 m, 130 N
- A 4.00 kg bag of potatoes is held by a string. If the tension in the string is 39.2 N, what is the state of motion of the bag? What should the tension in the string be so that the bag accelerates upward at 1.80 m/s2?
ANS: at rest, 46.4 N
- Suppose that a block of mass M on an inclined plane is joined to a mass m by a cord over a pulley. The block slides on a frictionless surface and the effects of the pulley are negligible. What is the acceleration of the block if the surface is inclined at 20.0° and m = ½M.
ANS: 1.0 m/s2 up the plane
- In the diagram, find the angle θ and the mass M.
ANS: 38.7°, 30.4 kg
- At the Six Flags amusement park near Atlanta, the Wheelie carries passengers in a circular path with a radius of 7.70 m. The ride makes a complete rotation every 4.00 s. (a) What is a passenger's speed due to the circular motion? (b) What acceleration does a passenger experience?
ANS: 12.1 m/s, 19.0 m/s2
- A student ties a 0.0600 kg lead fishing weight to the end of a piece of string and whirls it around in a horizontal circle. If the radius of the circle is 0.300 m and the object moves with a speed of 2.00 m/s, what is the horizontal component of force that directs the lead weight toward the centre of the circle? What is the tension in the string?
ANS: 0.80 N, 0.99 N
- A satellite is placed into a circular equatorial orbit at a height of 6.37 × 106 m above the surface of the Earth. Calculate the period and orbital velocity of the satellite. What is the acceleration due to gravity experienced by the satellite?
ANS: 3.96 h, 5.60 × 103 m/s, 2.47 m/s2
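The orbit numbers follow from Newtonian gravity for a circular orbit. The sketch below is a rough check of ours, not part of the handout; the values of G and the Earth's mass are standard handbook figures and are assumptions on our part.

```python
import math

G  = 6.674e-11     # gravitational constant, N m^2/kg^2
Me = 5.972e24      # mass of the Earth, kg
Re = 6.37e6        # radius of the Earth, m

r = Re + 6.37e6    # orbital radius: altitude equal to one Earth radius

v = math.sqrt(G * Me / r)      # circular orbital speed
T = 2 * math.pi * r / v        # orbital period
g = v**2 / r                   # centripetal acceleration = local g

print(f"T = {T/3600:.2f} h, v = {v:.2e} m/s, g = {g:.2f} m/s^2")
# close to the quoted 3.96 h, 5.60e3 m/s, 2.47 m/s^2
```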
- (Example 5.11) What is the height above the Earth's surface at which all synchronous satellites (regardless of mass) must be placed in orbit?
ANS: 3.59 × 107 m
- A mass of 5.00 kg is given a push along a horizontal plane so that its initial speed is 8.00 m/s. The coefficient of kinetic friction between the plane and mass is 0.400. How far will the mass slide before it comes to rest?
ANS: 8.16 m
- A student accidentally knocks a plant off a window sill, and it falls from rest to the ground 5.27 m below. Use the principle of conservation of mechanical energy to determine its speed just before it strikes the ground.
Ignore any effects due to air resistance.
ANS: 10.2 m/s
- A 2.00 kg mass slides down a frictionless plane that makes an angle of 30.0° with the horizontal. The mass starts from rest. What is its speed after it has slipped a distance of 3.00 m
along the plane?
ANS: 5.42 m/s
- The heart may be regarded as an intermittent pump that forces about 70.0 cm3 of blood into the 1.00 cm radius aorta about 75 times a minute. Measurements show that the average force with which the blood is pushed into the aorta is about 5.00 N. What is the approximate power used in moving the blood to the aorta?
ANS: 1.39 W
- Using the following data, determine the average force on a baseball hit by a bat. The baseball has a mass of 0.140 kg and an initial speed of 30.0 m/s. It rebounds from the bat with a speed of 40.0 m/s in the opposite direction and is in contact with the bat for 0.00200 s.
ANS: 4900 N
- A 60.0 kg ice skater is standing at rest on a frozen lake. The friction between his skates and the surface of the ice is negligible. If he throws a 2.00 kg block of ice horizontally with a velocity of 12.0 m/s, what is his recoil velocity?
ANS: -0.40 m/s
- In a safety test of automobile equipment, two cars of unequal mass undergo a head-on collision in which they stick together after the collision. A Buick Park Avenue with a mass of 1660 kg and an initial velocity of 8.00 km/h strikes an 830 kg Geo Metro moving with a velocity of 10.0 km/h toward the first. (a) What is the velocity of the combination immediately after the collision? (b) How do the accelerations of the two cars during collision compare?
ANS: +2.0 km/h, aGeo = -2 aBuick
- The ballistic pendulum is a simple device used to measure the velocity of a bullet. A block of wood of mass 1.50 kg, suspended from a group of light strings, is initially at rest when a bullet of mass 0.0100 kg is fired horizontally into the wood. The bullet embeds itself in the block, which then swings in the direction of the projectile's velocity, attaining a height of 0.350 m relative to its initial position. What was the bullet's initial speed?
ANS: 395 m/s
- A railroad boxcar of mass m1 is initially in motion to the right along a straight track, which we denote as the x axis with the positive direction to the right. The boxcar collides with a stationary boxcar of mass m2 on the same track. The ratio of the masses of the two cars is m1/m2 = 1/2. After the collision, what is the velocity of each boxcar if the collision is elastic?
ANS: vf1 = -vo1/3, vf2 = 2vo1/3
- Two cars approach an intersection at right angles. Car A has a mass of 1000 kg and travels at 8.00 m/s North; car B has a mass of 600 kg and travels at 10.0 m/s East. Immediately following the collision car B is observed to move with a velocity of 6.00 m/s at 60.0° North of East. Find the velocity of car A just after the collision. Was the collision elastic?
ANS: 6.44 m/s at 49.3° N of E, inelastic since KEf = 31.5 kJ and KEo = 62 kJ
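Working the two-dimensional collision by components is a good exercise in vector bookkeeping. Below is a minimal sketch (our own, not from the handout) that takes x as East and y as North.

```python
import math

mA, mB = 1000.0, 600.0                       # masses, kg
vA0 = (0.0, 8.0)                             # car A before: 8.00 m/s North
vB0 = (10.0, 0.0)                            # car B before: 10.0 m/s East
ang = math.radians(60.0)                     # car B after: 6.00 m/s at 60 deg N of E
vBf = (6.0 * math.cos(ang), 6.0 * math.sin(ang))

# Conserve momentum component by component and solve for car A's final velocity
vAf = tuple((mA * a0 + mB * b0 - mB * bf) / mA
            for a0, b0, bf in zip(vA0, vB0, vBf))

speed = math.hypot(*vAf)
angle = math.degrees(math.atan2(vAf[1], vAf[0]))    # measured from East toward North

KE0 = 0.5 * mA * sum(c * c for c in vA0) + 0.5 * mB * sum(c * c for c in vB0)
KEf = 0.5 * mA * speed**2 + 0.5 * mB * 6.0**2

print(f"car A after: {speed:.2f} m/s at {angle:.1f} deg N of E")      # ~6.44 m/s, ~49.3 deg
print(f"KE before = {KE0/1e3:.1f} kJ, after = {KEf/1e3:.1f} kJ")      # ~62 kJ vs ~31.5 kJ
```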
- An electric motor accelerates from rest to 500 rpm in 4.00 s. The output shaft of the motor has a radius of 0.0100 m, and a pulley of radius 0.0400 m is fitted to the shaft.
(a) Calculate the angular acceleration of the motor shaft while the motor is accelerating.
(b) Calculate the angular acceleration of the pulley.
(c) Through how many revolutions does the pulley rotate while accelerating?
Once the motor has reached constant angular velocity:
(d) Calculate the speed of a point on the rim of the motor shaft.
(e) Calculate the speed of a point on the rim of the pulley.
(f) Calculate the acceleration of a point on the rim of the pulley.
ANS: 13.1 rad/s2, 13.1 rad/s2, 16.7 rev, 0.524 m/s, 2.10 m/s, 110 m/s2
- A motorcycle whose wheels have a diameter of 60.0 cm approaches an intersection at a speed of 72.0 km/h. When the motorcycle is 50.0 m from the intersection, the traffic light turns red and the cyclist applies the brakes, decelerating uniformly. She comes to rest at the intersection. Find (a) the angular velocity of the wheels before the brakes are applied; (b) the angular acceleration of the wheels; (c) the angle through which each wheel turns during the time the cycle decelerates.
ANS: 66.7 rad/s, -13.3 rad/s2, 26.6 rev
- (Example 9.5) A bodybuilder holds a dumbbell of weight Wd as shown in Figure 9.8a. His arm is extended horizontally and weighs Wa = 31.0 N. The deltoid muscle is assumed to be the only muscle acting and is attached to the arm as shown. The maximum force M that the deltoid muscle can supply to keep the arm horizontal has a magnitude of 1840 N. Figure 9.8b shows the distances that locate where the various forces act on the arm. What is the heaviest dumbbell that can be held, and what are the horizontal and vertical force components, Sx and Sy that the shoulder joint applies to the arm?
ANS: Wd = 86.1 N, Sx = 1790 N, Sy = -297 N
- A sign weighing 400 N is suspended at the end of a 350 N uniform rod that is
hinged at the wall. What is the tension in a support cable that attaches the end
of the rod to the wall, if the cable makes an angle of 35.0° with the rod?
ANS: T = 1000 N
- A cylindrical winch of radius R and moment of inertia I is free to rotate without friction about an axis. A cord of negligible mass is wrapped around the winch and attached to a bucket of mass m. When the bucket is released, it accelerates downward as the cord unwinds from the winch. Find the acceleration of the bucket.
ANS: a = mg/(m + I/R2)
- An ice skater starts spinning at a rate of 1.50 rev/s with arms extended. She then pulls her arms in close to her body, resulting in a decrease of her moment of inertia to three quarters of the initial value. What is the skater's final angular velocity?
ANS: 2.0 rev/s
- Four children, each of mass m = 30.0 kg, are on the edge of a merry-go-round of radius R = 2.00 m and mass M = 200 kg that is initially rotating with an angular velocity of 2.08 rad/s. The 4 children now make their way toward the centre of the merry-go-round. Find the angular velocity of the system when the children are 0.750 m from the centre. What is the kinetic energy of the system when the children are at the periphery and when they are 0.750 m from the centre of the merry-go-round?
ANS: 3.92 rad/s, KEo = 1900 J, KEf = 3590 J
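The merry-go-round answer rests on conservation of angular momentum. The sketch below is a check of ours; it assumes the platform is a uniform disk (I = MR2/2) and treats each child as a point mass.

```python
m_child, n_child = 30.0, 4      # kg, number of children
M, R = 200.0, 2.0               # platform mass (kg) and radius (m)
omega0 = 2.08                   # initial angular velocity, rad/s
r_final = 0.750                 # final radius of the children, m

I_disk = 0.5 * M * R**2
I0 = I_disk + n_child * m_child * R**2            # children at the rim
If = I_disk + n_child * m_child * r_final**2      # children at 0.750 m

omega_f = I0 * omega0 / If      # L = I * omega is conserved
KE0 = 0.5 * I0 * omega0**2
KEf = 0.5 * If * omega_f**2

print(f"omega_f = {omega_f:.2f} rad/s")                    # ~3.92 rad/s
print(f"KE initial = {KE0:.0f} J, KE final = {KEf:.0f} J") # ~1900 J, ~3580 J
```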
- The position of an object relative to its equilibrium location is given by x = 0.40 cos(7.85 t), where x is in metres and t is in seconds. What are the amplitude of oscillation, the frequency, and the angular frequency? What is the velocity of the object at t = 0? What is the acceleration at t = 0?
ANS: 0.40 m, 1.25 Hz, 7.85 rad/s, 0, -24.7 m/s2
- A metal block is hung from a spring that obeys Hooke's Law. When the block is pulled down 12.0 cm from the equilibrium position and released from rest, it oscillates with a period of 0.750 s, passing through the equilibrium position with a speed of 1.00 m/s.
(a) What is the displacement and (b) what is the speed of the block 0.280 s after it is released?
ANS: -0.084 m, -0.71 m/s
- A mass is attached to a spring of spring constant 400 N/m. If this mass is displaced from equilibrium by 4.00 cm and released at t = 0, it oscillates at a frequency of 15.6 Hz. Write an expression for the displacement, velocity, and acceleration of this mass as a function of time and determine the value of the mass.
ANS: 0.0400 m cos(98.0 t), -3.92 m/s sin(98.0 t), -384 m/s2 cos (98.0 t), 0.042 kg
- A mass of 200 g on a frictionless horizontal surface is connected to a horizontal ideal spring of spring constant 250 N/m. The mass is displaced 2.00 cm and released. Calculate the kinetic energy and speed of the mass when it passes through the equilibrium position.
ANS: 0.0500 J, 0.707 m/s
- A 200 g object on a frictionless plane inclined at 30.0° with the horizontal is pushed against a 250 N/m spring until the spring is compressed 5.00 cm. Calculate the speed of the object when it has travelled a distance of 25.0 cm along the plane.
ANS: 0.822 m/s
- A spring stretches by 0.150 m when a mass of 1.00 kg is suspended from its end. What mass should be attached to this spring so that the natural vibration frequency of the system will be 8.00 Hz?
ANS: 0.026 kg
- A 0.500 kg mass is supported by a spring. The system is set in vibration at its natural frequency of 4.00 Hz with an amplitude of 5.00 cm. Find the spring constant of the spring, the maximum speed of the mass, and the energy of the system.
ANS: 316 N/m, 1.26 m/s, 0.40 J
- A simple pendulum consists of a bob of mass 2.4 kg and a string of length L. What should the value of L be so that the period of the pendulum is 2.00 s
at a location where g = 9.80 m/s2?
ANS: 0.993 m
- The acceleration of gravity varies slightly over the surface of the Earth. If a pendulum has a period of 3.0000 s at a location where g = 9.803 m/s2 and a period of 3.0024 s at another location, what is g at this new location?
ANS: 9.787 m/s2
- A nurse administers medication in a saline solution to a patient by infusion into a vein in the patient's arm. The density of the solution is 1.00 × 103 kg/m3 and the gauge pressure inside the vein is 2.40 × 103 Pa. How high above the insertion point must the container by hung so that there is sufficient pressure to force the fluid into the patient?
ANS: 24.5 cm
- You can make a simple hydraulic lift by fitting a piston attached to a handle into a 3.00 cm diameter cylinder, which is connected to a larger cylinder of 24.0 cm diameter. If a 50.0 kg woman puts all her weight on the handle of the smaller piston, how much weight can be lifted by the larger one?
Assume both pistons are at the same height.
ANS: 3.14 × 104 N
- A block of Styrofoam floats on water with only 12% of its volume submerged. What is the average density of Styrofoam?
ANS: 120 kg/m3
- An object of mass m and volume V is suspended from a string so that it is half-submerged in a fluid of density ρ. Determine the expression for the tension in the string.
ANS: T = mg - ρgV/2
- An object has a weight of Wair in air (its true weight) and an apparent weight of Wsub when completely submerged in water. Determine the expression for the density of the object in terms of Wair, Wsub, and the density of water, ρwater.
ANS: ρ = ρwater (Wair/(Wair - Wsub))
- A horizontal pipe of 25.0 cm2 cross section carries water at a velocity of 3.00 m/s. The pipe feeds into a smaller pipe with a cross section of only 15.0 cm2. (a) What is the velocity of water in the smaller pipe? (b) Determine the pressure change that occurs on going from the larger diameter pipe to the smaller pipe.
ANS: 5.0 m/s, -8000 Pa
- A patient is given sucrose intravenously. Her venous gauge pressure is 18.0 mm Hg and the elevation difference between the intravenous needle and sucrose bottle is 0.80 m. If the rate of sucrose flow is to be 2.00 mL/min, what should the diameter of the 4.00 cm long needle be? Assume the density and viscosity of sucrose to be 1.06 × 103 kg/m3 and 2.084 × 10–3 N.s/m2, respectively.
ANS: 0.372 mm
- The diagram represents two snapshots of a wave on a rope. The snapshots were taken 0.100 s apart. We know that the wave was travelling to the right and that it moved by less than one wavelength between pictures. Find its (a) wavelength, (b) wave speed, and (c) frequency. (d) Write an expression for the rope's displacement from its equilibrium position as a function of position and time. The maximum displacement is 3.00 cm.
ANS: 2.00 m, 10.0 m/s, 5.00 Hz
- Consider a constant-power source that is emitting sound uniformly in all
directions. At a distance r1 from the source the sound intensity is
1.25 × 10-4 W/m2. Calculate the
corresponding intensity level in dB. At a distance r2 from the source the
intensity level is 87.0 dB. Calculate the corresponding sound
intensity. Calculate the ratio r2/r1.
ANS: 81.0 dB, 5.01 × 10-4 W/m2, r2/r1 = 0.500 (the intensity ratio I2/I1 is 4.00)
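The decibel arithmetic and the inverse-square falloff can be checked with a few lines of code. This sketch is ours, assuming a reference intensity of 10-12 W/m2 and an ideal point source radiating uniformly.

```python
import math

I0 = 1e-12       # reference intensity, W/m^2
I1 = 1.25e-4     # intensity at r1, W/m^2
L2 = 87.0        # intensity level at r2, dB

L1 = 10 * math.log10(I1 / I0)      # level at r1
I2 = I0 * 10**(L2 / 10)            # intensity at r2

# For a point source, I is proportional to 1/r^2, so r2/r1 = sqrt(I1/I2)
ratio = math.sqrt(I1 / I2)

print(f"L1 = {L1:.1f} dB")                               # ~81.0 dB
print(f"I2 = {I2:.2e} W/m^2")                            # ~5.01e-4 W/m^2
print(f"r2/r1 = {ratio:.3f}, I2/I1 = {I2 / I1:.2f}")     # ~0.50 and ~4.0
```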
- A train is approaching a grade crossing at 80.0 km/h and sounds its horn, whose frequency is 320 Hz. What is the frequency of sound heard by a stationary observer at the grade crossing (a) as the train approaches; (b) as the train recedes? The speed of sound in air is 343 m/s.
ANS: 342 Hz, 300 Hz
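The Doppler formulas for a moving source are easy to script. The lines below are a check of ours, assuming a stationary observer and the 343 m/s sound speed given in the problem.

```python
v  = 343.0          # speed of sound in air, m/s
f  = 320.0          # horn frequency, Hz
vs = 80.0 / 3.6     # train speed: 80.0 km/h converted to m/s

f_approach = f * v / (v - vs)    # source moving toward the observer
f_recede   = f * v / (v + vs)    # source moving away from the observer

print(f"approaching: {f_approach:.1f} Hz")   # ~342 Hz
print(f"receding:    {f_recede:.1f} Hz")     # ~300 Hz
```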
- A ship transmits sonar pulses of 20.00 MHz at regular intervals while steaming due west. The sonar operator records a reflected signal from a stationary reef due west of the ship and notes that the frequency of the received signal is 20.15 MHz. The speed of sound in seawater is 5620 km/h (1560 m/s). Calculate the speed of the ship.
ANS: 21.0 km/h
- One string of a bass fiddle is 1.80 m long and has a fundamental frequency of resonance of 81 Hz when it is under a tension of 120 N. If the total length of the string (including the part wound about the tuning peg) is 2.10 m, what is the mass of the string? If the tension is increased to 140 N, what will the new
fundamental resonant frequency be?
ANS: 2.96 g, 87.5 Hz
- You are asked to construct a pipe that will resonate at room temperature at the following frequencies: 180 Hz and 540 Hz, and no other frequencies between 0 and 600 Hz. Describe the pipe and give its length. (speed of sound = 343 m/s)
ANS: 0.47 m long with one end open and one end closed
- Two point charges of +4.00 × 10–2 C and –6.00 × 10–2 C are 3.00 m apart. What is the magnitude and the nature of the electrostatic force between them?
ANS: 2.4 × 106 N, attractive
- Three charges are located along a straight line as shown. What is the net electrostatic force on the +3 µC charge?
ANS: -1.42 N (to left)
- Where should the +3 µC charge be placed in the following arrangement of charges so that it experiences no net electrostatic force?
ANS: x = 1.66 m
- Three charges q1 = +3.70 µC, q2 = –3.70 µC, and q3 = +4.80 µC are fixed at the corners of an equilateral triangle 3.00 × 10–2 m on a side. Find the magnitude and direction of the net force on charge q3 due to the other charges.
ANS: 178 N in +x direction (q3 is at the apex of the triangle and q1 and q2 are on the x-axis)
- An electron is located in an electric field of 600 N/C. What is the force acting on the electron and what is its acceleration if it is free to move? If the field points in the positive x direction, what is the direction of the acceleration of the electron?
ANS: 9.61 × 10-17 N opposite to E, 1.05 × 1014 m/s2 in -x direction
- A charge of 4.00 µC is placed at x = 0, y = 20.0 cm and a charge of –2.00 µC is placed at x = 20.0 cm, y = 0. Determine the electric field at the origin.
ANS: 10.1 × 105 N/C at -63.4° to +x axis
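Superposing the two point-charge fields at the origin is a small vector sum. The sketch below is ours, with k = 8.99 × 109 N·m2/C2 assumed, and it reproduces the quoted magnitude and direction.

```python
import math

k = 8.99e9                              # Coulomb constant, N m^2/C^2
charges = [(+4.00e-6, (0.0, 0.20)),     # (charge in C, position in m)
           (-2.00e-6, (0.20, 0.0))]

Ex = Ey = 0.0
for q, (x, y) in charges:
    r = math.hypot(x, y)
    # The field of a positive charge points away from it; at the origin that is
    # along the unit vector (-x/r, -y/r). A negative q flips the direction.
    Ex += k * q / r**2 * (-x / r)
    Ey += k * q / r**2 * (-y / r)

E = math.hypot(Ex, Ey)
angle = math.degrees(math.atan2(Ey, Ex))
print(f"E = {E:.2e} N/C at {angle:.1f} deg from the +x axis")   # ~1.01e6 N/C at ~ -63.4 deg
```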
- An alpha particle (helium nucleus) is accelerated in a cyclotron to an energy of 40.0 MeV (4.00 × 107 eV). Calculate the speed of this particle. What would be the speed of a proton whose energy is 40.0 MeV? (The mass of the alpha particle is approximately 6.64 × 10–27 kg and the mass of the proton is 1.673 × 10–27 kg.)
ANS: 4.39 × 107 m/s, 8.75 × 107 m/s
- A proton is released from rest in a uniform electric field of 500 V/m.
Calculate the speed of the proton after it has moved a distance of 0.500 m.
ANS: 2.19 × 105 m/s
- Charges of 16.0 and 24.0 µC are separated by 0.800 m. What is the electric field and the electric potential midway between the two charges?
ANS: 4.5 × 105 N/C toward 16 µC charge, 9.0 × 105 V
- What is the resistance of a resistor through which 8.00 × 104 C flow in one hour if the potential difference across it is 12.0 V?
ANS: 0.54 Ω
- A piece of copper wire has a cross section of 4.00 mm2 and a length of 2.00 m. The resistivity of copper is 1.72 × 10-8 Ω·m at 20.0 °C and its temperature coefficient of resistivity is 0.00393/°C.
(a) What is the electric resistance of the wire at 20.0 °C?
(b) What is the potential difference across the wire when it carries a current of 10.0 A?
(c) Calculate the resistance at 40.0 °C.
ANS: 0.0086 Ω, 0.086 V, 9.28 × 10-3 Ω
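These resistance figures follow directly from R = ρL/A and the linear temperature model R(T) = R20[1 + α(T - 20)]. The sketch below is a check of ours; the copper resistivity used is the standard handbook value consistent with the quoted answers.

```python
rho20 = 1.72e-8     # resistivity of copper at 20 C, ohm*m (handbook value)
alpha = 0.00393     # temperature coefficient of resistivity, 1/C
L     = 2.00        # length, m
A     = 4.00e-6     # cross section: 4.00 mm^2 expressed in m^2
I     = 10.0        # current, A

R20 = rho20 * L / A                        # (a) resistance at 20 C
V   = I * R20                              # (b) potential difference at 10.0 A
R40 = R20 * (1 + alpha * (40.0 - 20.0))    # (c) resistance at 40 C

print(f"R(20 C) = {R20:.4f} ohm")          # ~0.0086 ohm
print(f"V       = {V:.3f} V")              # ~0.086 V
print(f"R(40 C) = {R40:.5f} ohm")          # ~0.00928 ohm
```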
- (Example 19.3) A 60.0 W headlight is connected across a 12.0 V battery. Calculate the number of electrons that flow through the headlight filament in one hour.
ANS: 1.13 × 1023 electrons
- An electric hair dryer provides a good example of electric resistance. A typical dryer designed to operate on a 120 V household circuit is rated at 1500 W. What is the resistance of the dryer?
ANS: 9.6 Ω
- (Example 20.9) A 6.00 Ω resistor and a 3.00 Ω resistor are connected in series with a 12.0 V battery. Assuming that the battery contributes no resistance to the circuit, find
(a) the current
(b) the power dissipated in each resistor
(c) the total power delivered to the resistors by the battery.
ANS: 1.33 A, 10.6 W in 6 Ω, 5.31 W in 3 Ω, 15.9 W
- To adjust the light intensity from a desk lamp with an incandescent bulb, a person places a variable resistor R in series with the desk lamp. The lamp's bulb is rated 100 W at 110 V. What should be the range over which R can be varied so that the bulb can be operated between 40.0 W and 80.0 W?
ANS: 15.0 Ω < R < 70.0 Ω
- A 10.0 V battery is connected to the parallel combination of an unknown resistance and a 5.00 Ω resistor. The total power dissipated in the circuit is 45.0 W. Find the unknown resistance.
ANS: 4.00 Ω
- A proton moving with a velocity of 6.00 × 106 m/s to the North enters a region where the magnetic field is 1.50 T and points directly up. Determine the required magnitude and direction of the electric field E that will allow the proton to move undeviated through this region.
ANS: 8.99 × 106 N/C West
- Singly ionized ions that have been accelerated through a potential of 800 V describe a circular trajectory of 16.0 cm radius in a magnetic field of 0.200 T. What is the mass of these atoms?
ANS: 1.03 × 10-25 kg
- A cylindrical container is 4.00 cm in diameter and 2.40 cm high. The container is initially empty. A woman looks into the container in such a way that she can just see the far edge of the bottom of the container. The container is now filled with an unknown transparent liquid. Without moving her head from her initial position, the woman can now see the middle of the bottom of the container. What is the refractive index of the unknown liquid? (Assume the refractive index of air to be 1.00.)
ANS: 1.34
- (Example 26.2) A searchlight on a yacht is being used at night to illuminate a sunken chest. The chest is 3.30 m below the surface of the water and is a lateral distance of 2.00 m from the point at which the searchlight beam enters the water.
(a) At what angle of incidence should the searchlight be aimed?
(b) What is the apparent depth of the sunken chest?
ANS: 43.6°, 2.10 m
- An optical fibre (light pipe) is made of material with n = 1.70 and is given a protective coating with another material of index of refraction 1.25. What is the critical angle for total internal reflection?
ANS: 47.3°
- An object is located 9.00 cm in front of a converging lens (f = 6.00 cm). Determine the location of the image.
ANS: +18.0 cm
- A lens forms an erect image of an object twice the size of the object. The image appears 60.0 cm from the lens. Determine the object distance and focal length of the lens.
ANS: do = +30.0 cm, f = +60.0 cm
- A converging lens (f = 12.0 cm) is located 30.0 cm to the left of a diverging lens (f = –6.00 cm). A postage stamp is placed 36.0 cm to the left of the converging lens.
(a) Locate the final image of the stamp relative to the diverging lens.
(b) Calculate the overall magnification.
(c) Is the final image real or virtual, upright or inverted, larger or smaller?
ANS: -4.00 cm, -0.167, virtual, inverted, reduced
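Two-lens problems are just the thin-lens equation applied twice, with the first image serving as the object for the second lens. Below is a minimal sketch of ours, with distances in cm and the usual sign convention.

```python
def image_distance(f, do):
    """Thin-lens equation 1/f = 1/do + 1/di, solved for di."""
    return 1.0 / (1.0 / f - 1.0 / do)

# Converging lens: f = +12.0 cm, object 36.0 cm to its left
di1 = image_distance(12.0, 36.0)     # +18.0 cm (real image to the right of lens 1)
m1  = -di1 / 36.0

# That image is the object for the diverging lens 30.0 cm farther right
do2 = 30.0 - di1                     # +12.0 cm
di2 = image_distance(-6.0, do2)      # negative: virtual image left of lens 2
m2  = -di2 / do2

print(f"final image at {di2:.1f} cm, overall m = {m1 * m2:.3f}")
# ~ -4.0 cm from the diverging lens, m ~ -0.167: virtual, inverted, reduced
```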
- A person goes to the optometrist, who prescribes corrective contact lenses of –40.0 cm focal length. With the aid of these
contacts, that person's far point is at infinity, and the near point is at 20.0 cm. What are the person's uncorrected far and near points?
ANS: 40.0 cm and 13.3 cm
- A biology student wishes to use a 6.00 cm focal length lens as a magnifier.
The student's near point is 25.0 cm.
(a) What is the magnification of the lens when used with a relaxed eye?
(b) What is the maximum magnification of the lens?
ANS: 4.17, 5.17
- You are given a 180 mm long tube with an objective lens of focal length 2.00 mm at one end and an eyepiece lens of focal length 30.0 mm at the other end. Where should an object be placed to use the tube and lenses as a microscope with maximum magnification?
Assume the user has a near point of 30.0 cm.
ANS: 2.03 mm from objective lens
- The overall magnification of an astronomical telescope is desired to be -20.0×. If an objective of 80.0 cm focal length is used:
(a) What must be the focal length of the eyepiece?
(b) What is the refractive power of the eyepiece lens in diopters?
(c) What is the overall length of the telescope when adjusted for use by the relaxed eye?
ANS: 4.00 cm, 25.0 diopters, 84.0 cm
- Two slits separated by 0.400 mm are illuminated with a monochromatic, coherent light source. The separation between the 0th and 1st order maxima of the interference pattern
detected on a screen 2.50 m from the slits is 1.20 mm. Find the wavelength of the incident light.
ANS: 192 nm
- The angle of deviation of light of 400 nm wavelength is 30.0° in second order.
(a) How many lines per centimetre are there on this grating?
(b) How many orders of the complete visible spectrum are produced by this grating?
(c) Are these visible spectra clearly separated in all orders?
ANS: 6250 /cm, 2
- A light source emits 1018 photons per second. If the wavelength of the emitted light is 600 nm, what is the power radiated?
ANS: 0.331 W
- The work function of sodium is 2.30 eV. What is the maximum wavelength of light that is able to release photoelectrons from a sodium surface?
ANS: 539 nm
- A 17.2 keV x-ray from molybdenum is Compton-scattered through an angle of 90.0°. What is the energy of the x-ray after scattering?
ANS: 16.6 keV
- What is the wavelength of the Balmer line that corresponds to the n = 5 to n = 2 transition?
ANS: 434 nm
- What is the minimum voltage required in an x-ray tube to produce photons whose wavelength is 0.100 nm?
ANS: 1.24 × 104 V
- Compute the binding energy per nucleon of 14C. (The atomic mass of 14C is 14.003242 u.)
ANS: 7.52 MeV/nucleon
- A sample of 131I (half-life of 8 days) has an activity of 0.0500 Ci.
(1 Ci = 3.70 × 1010 decays/s)
(a) How many radioactive iodine nuclei does this sample contain?
(b) What will the activity of the sample be after 16 days?
(c) How many days will elapse before the activity of the sample has diminished to 0.00200 Ci?
ANS: 1.85 × 1015 nuclei, 0.0125 Ci, 37.3 days
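The iodine-131 answers all come from N = A/λ and exponential decay. The sketch below is a numerical check of ours, assuming 1 Ci = 3.70 × 1010 decays/s as given above.

```python
import math

half_life = 8.0 * 24 * 3600          # 8 days, in seconds
Ci = 3.70e10                         # decays per second in one curie
A0 = 0.0500 * Ci                     # initial activity, Bq

lam = math.log(2) / half_life        # decay constant, 1/s

N0  = A0 / lam                                   # (a) nuclei present now
A16 = A0 * math.exp(-lam * 16 * 24 * 3600)       # (b) activity after 16 days
t   = math.log(A0 / (0.00200 * Ci)) / lam        # (c) time to fall to 0.00200 Ci

print(f"N0 = {N0:.2e} nuclei")                   # ~1.8e15
print(f"A(16 days) = {A16 / Ci:.4f} Ci")         # ~0.0125 Ci
print(f"t = {t / 86400:.1f} days")               # ~37 days
```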
- An archaeologist finds a bone that contains 5.00 g of carbon. The 14C counting rate from this 5.00 g of carbon is found to be 30.0 counts/min. If the 14C counting rate from natural carbon is
0.230 Bq per gram, how old is the bone? The half-life of 14C is 5730 years.
ANS: 7580 y
- Calculate the energy released in the initial step of the neutron-induced fission of 235U that
yields the unstable isotopes of 144Ba and 89Kr and three neutrons?
Atomic masses: 235U = 235.043925 u; 144Ba = 143.922673 u; 89Kr = 88.917563 u.
ANS: 174 MeV | http://physics.usask.ca/~bzulkosk/phys111/sec02/sec02lecture_examples.htm | 13 |
60 | Yale-New Haven Teachers Institute
Mary Elizabeth Jones
Use the relationship between wavelength, frequency and wave velocity to find one variable when two are given.
Illustrate the Doppler effect.
Analyze the role of noise as one type of pollution.
Distinguish between noise and music.
Research the original scientists of math and sound.
Teach math concepts.
This unit is designed to be taught over one marking period to 6th grade math and science students. The unit is divided into three sections. The first section covers wave characteristics, sound velocity, pitch and frequency and the Doppler effect. The second section explores amplitude, sound pressure level, noise, and musical sounds. The last section focuses on persons involved in the origins of sound.
In order to make this unit meaningful to a math or science class, we must show a relationship between music, math and science. The Pythagorean Doctrine made the connection in this way: geometry (math) as magnitude at rest; astronomy (science) as magnitude in motion; arithmetic as numbers absolute; and music as numbers applied.
Students will discuss wave characteristics such as how sound waves travel and two types of wave motion. Students will learn to describe the transmission of sound through a medium and be able to recognize the relationship between amplitude, loudness and frequency of pitch.
Math skills will be taught using information compiled in science class.
Students will learn to apply specific formulas to solve problems. Opportunities will be provided for students to use math skills to measure, calculate, graph, and analyze data and complete mini-lab activities.
When is it music and when is it noise? Students will learn to recognize noise as a form of pollution and identify sounds that are considered noise both through frequency content and loudness. Students will be required to complete a report on the effects of noise on hearing and what can be done to protect their hearing.
Students will be required to research the original great scientists (such as Pythagoras and Thales) of sound and math along with their contributions. The research will be used to prepare a report.
Two types of waves will be discussed in this section; transverse waves and compressional waves. A transverse wave is one in which the vibrations are at right angles to the direction the wave is traveling (ex. Waves on a rope). A compressional wave is one in which the vibration is in the same direction as the wave is traveling (ex. Sound waves in the air).
Water waves are probably the easiest type of wave to visualize. If you have been in a boat, you know that approaching waves bump against the boat but do not carry the boat along with them. The boat just moves up and down as the waves pass by. Like the boat, the water molecules on the surface of the lake move up and down, but not forward. Only energy carried by the waves moves forward.
Waves are rhythmic disturbances that carry energy through matter or space.
Water waves transfer energy through the water. Earthquakes transfer energy in powerful shock waves that travel through Earth. Both types of waves travel through a medium. A medium is a material through which a wave can transfer energy. This medium may be a solid, a liquid, a gas or a combination of these. Radio waves and light waves, however, are types of waves that can travel without a medium.
Two types of wave motion can carry energy. Figure 1 shows how you can make a transverse wave by snapping the ends of a rope up and down while a friend holds one end. Figure 2 shows how a compressional wave should look.
Notice that as the wave moves, some of the coils are squeezed together just as you squeezed the ones on the end of the spring. The crowded area is called compression.
The compressed area then expands, spreading the coils apart creating a less dense area. This less dense area of the wave is called a rarefaction. Does the whole spring move? Tie a piece of string on one end of the coils and observe the motion. The string moves back and forth with the coils. Therefore, the matter in the medium does not move forward with the wave. Instead, the wave carries only the energy forward.
Transverse waves have wavelengths, frequencies, amplitudes and velocities. Compressional waves also have these characteristics. A wavelength in a compressional wave is made of one compressional, and one rarefaction as shown in Figure 3. Notice that one wavelength is the distance between two compressions or two rarefactions of the same wave. The frequency is the number of compressions that pass a place each second. If you repeatedly squeeze and release the end of the spring three times each second, you will produce a wave with a frequency of 3 Hz.
The highest points of a wave are called crests; the lowest points are called troughs. Waves are measured by their wavelength. Wavelength is the distance between a point on one wave and the identical point on the next wave, such as from crest to crest or trough to trough.
Do high-pitched sounds travel at a different speed than low-pitched sounds? Let’s ask this question differently. If you were at an outdoor band concert (without electronic amplification) and the conductor gave the downbeat to the band, would the sound of the piccolo get to you before or after the sound of the tuba?1
Your experience will help to tell you that if there were much of a difference in the arrival times between high and low pitches, not only would it be difficult to keep the performance together, but also it would sound quite different up close to the band than further away. In fact, sound in the normal audible range travels at a constant speed independent of pitch.
Most people cannot hear sound frequencies above 20,000 Hz The frequency of the human voice range that carries information extends from about 250 to about 2000 Hz in a normal conversation. Bats, however, can detect frequencies as high as 100,000 Hz. Ultrasound waves are used in sonar as well as in medical diagnosis and treatment. Sonar, or sound navigation ranging, is a method of using sound waves to estimate the distance to, size, shape and depth of underwater objects.
Sound must have a medium (liquid, gas or solid) through which to travel. It cannot travel through a vacuum. A vacuum is a space that is empty of everything, even air. If you put a ringing alarm clock into a jar and pump the air out of the jar, the sound of the ringing will decrease as you pump out the air. When most of the air molecules are out of the jar, not enough molecules remain to form sound waves and the ringing sound stops.
When two waves of the same frequency reach the same point, they may interfere constructively or destructively. If their amplitudes are both equal to A, the resultant amplitude may be anything from zero up to 2 A. The same is true of a wave that reflects back on itself after hitting a hard surface.2
Wave velocity can be determined by multiplying the wavelength and frequency. Wavelength is represented by the Greek letter lambda, λ. If you know any two variables in an equation you can find the unknown variable. Velocity = wavelength × frequency.
For sound waves, the sound velocity does not change with frequency for a given medium.
Calculating the Frequency of a Wave
Problem: Earthquakes can produce three types of waves. One of these is a transverse wave called an s wave. A typical s wave travels at 5000m/s. Its wavelength is about 417 m.
What is its frequency?
Information: velocity, v = 5000 m/s; wavelength, λ = 417 m
Strategy Hint: Remember, Hz = 1/s, so m/s divided by m = 1/s = 1 Hz
Unknown Information: frequency (f)
Equation to use: v = λ × f
Solution3: v = λ × f, so f = v/λ = (5000 m/s)/(417 m) = 12 Hz
Calculating Velocity of a Wave
Problem: A wave is generated in a wave pool at a water amusement park. The wavelength is 3.2 m.
The frequency of the wave is 0.60 Hz.
What is the velocity of the wave?
Information: wavelength, λ = 3.2 m; frequency, f = 0.60 Hz
Strategy Hint: Another way to express hertz is 1/second; therefore, m × 1/s = m/s.
Unknown Information: velocity (v)
Equation to use: v = λ × f
Solution3: v = λ × f = 3.2 m × 0.60 Hz = 1.92 m/s
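Both worked examples can be checked with two one-line helper functions. This sketch is a classroom add-on of ours, not part of the original unit; SI units are assumed.

```python
def frequency(velocity, wavelength):
    """f = v / wavelength; gives Hz when v is in m/s and wavelength in m."""
    return velocity / wavelength

def wave_velocity(wavelength, freq):
    """v = wavelength * f; gives m/s when wavelength is in m and f in Hz."""
    return wavelength * freq

print(f"s-wave frequency: {frequency(5000, 417):.0f} Hz")      # ~12 Hz
print(f"wave-pool speed: {wave_velocity(3.2, 0.60):.2f} m/s")  # ~1.92 m/s
```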
Activity 1. Transverse waves
Problem: Resonance: How can wave energy be stored?
- small Slinky
- stop watch
1. You and a partner should pull on each end of the slinky until it stretches about I meter.
2. Hold one end of the Slinky motionless and shake the other end to make the slinky vibrate in one segment transverse to its length.
3. Count the number of vibrations the spring makes in 10 seconds.
4. Make a second wave by moving the end of the spring from side to side twice as fast as before. Look for the spring to vibrate in two equal segments. Each segment will move in opposite directions.
5. Try to make the spring vibrate in three equal segments.
1. Draw pictures of the spring for each of the three forms of wave you made. How many transverse waves does each picture represent?
2. The spring can store energy when the wave is the right size to exactly “fit” onto the spring. That is, you produce a resonance. How many wavelengths fit onto the spring for each of the three forms of waves produced.
Conclude and Apply
3. If wave energy is to be stored in the spring, how must the length of the spring and the length of the wave compare?
4. Why could you store short wave energy in a long spring but are not able to store long wave energy in a short spring?
Answers to questions
1. Drawings should show:
- one half of a wave
- one full wave
- one and a half waves
2. The first: the spring holds one half of a wave. The second: the spring holds two halves of a wave. The third: the spring holds three halves of a wave.
3. Wave energy can be stored in the spring if the spring is some whole number or half number of waves in length.
4. In order to store wave energy, the spring must be at least a half wavelength long.
Activity 2. Frequency of Sound Waves
Problem: What is the frequency of a musical note?
- plastic pipe
- rubber band
- metric ruler
1. Measure the length of the pipe and record it on the data table.
2. Stretch one end of the rubber band across the open end of the pipe and hold it firmly in place. Caution: Be careful not to release your grip on the ends of the rubber band.
3. Hold the rubber band close to your ear and pluck it.
4. Listen for a double note.
5. Slowly relax the tightness of the rubber band. Listen for one part of the double note to change and the other part to remain the same.
6. Continue to adjust the tightness until you hear only one note.
7. Exchange pipes with another group and repeat the experiment.
Data and Observation - sample data table
Sound Frequencies produced by an open Pipe
- Length of Pipe = 0.2m
- Length of wave = 0.4m
- Frequency of sound = 855Hz
1. The wavelength you obtained in step 6 is twice the length of the pipe. Calculate the wavelength.
2. Assume the velocity of sound to be 342 m/s. Use the equation frequency = velocity/wavelength to calculate the frequency of the note.
3. What was the wavelength and frequency of the sound waves in the second pipe?
Conclude and Apply
4. How does the length of a pipe compare with the frequency and wave?
length of the sound it can make? 5. A pipe organ uses pipes of different lengths to produce various notes. What other musical instrument uses lengths of pipe to produce musical notes?
6. If you listen closely, you can hear longer pipes produce certain higher frequency sounds. How is this possible?
Answers to questions.
1. Longest wavelength = 2 × pipe length
2. f = (34,200 cm/s)/(40 cm) = 855 Hz
3. Answers will vary. Wavelength will increase and frequency decrease as the pipe becomes longer.
4. The longer the pipe, the longer the wavelength and the lower the frequency.
5. All horns and woodwinds as well as the human voice use a vibrating air column.
A xylophone uses open pipes to amplify the sound of the vibrating bars.
6. A series of shorter waves will "fit" the pipe if their wavelengths are 1×, 2/3×, 1/2×, 2/5×, ... of the length of the pipe.
A sound level meter, consisting of a microphone, an amplifier and a meter that reads in decibels, measures sound pressure levels. Sound pressure levels of a number of sounds are given in Table 2. Class exercise: students can obtain a feeling for different sound pressure levels by using a sound level meter.
Table 2 Typical Sound Levels
|Jet takeoff (60 meters)||120 dB|
|Shout (1.5 meters)||100 dB|
|Heavy truck (15 meters)||90 dB very noisy|
|Automobile interior||70 dB noisy|
|Normal conversation (1 meter)||60 dB|
|Office, classroom||50 dB moderate|
|Bedroom at night||30 dB quiet|
|Broadcast studio||20 dB|
|Rustling leaves||10 dB barely audible|
The word sound is used to describe two different things: (1) and auditory sensation in the ear and (2) the disturbance in a medium which can cause a sensation.
Think of all the sounds that you’ve heard since you awoke this morning. Did you hear a blaring alarm, honking horns, human voices, and lockers slamming? Your ears allow you to recognize these different sounds. These sounds all have one thing in common. The vibration of objects produces them all. The vibrations of your vocal cords produce voice. The energy produced by these vibrations is carried to your friend’s ear by sound waves traveling through a medium, air.
The speed of sound waves depends on the medium through which the wave travels and it temperature. Air is the most common medium you hear sound waves through, but sound waves can be transmitted through any type of matter. Liquids and solids are even better conductors of sound than air because the individual particles in a liquid or solid are much closer together than the particles in air.
Sound waves transmit energy faster in substances with smaller spaces between the particles. A research question for students: Can sound be transmitted if there is no matter? Astronauts on the moon would find it impossible to talk to each other without the aid of modern electronic communication systems. Since the moon has no atmosphere, there is no air to compress or expand.
The temperature of the medium is also an important factor in determining the speed at which sound travels. As the temperature of the substance increases, the molecules move faster and collide more frequently. This increase in molecular collisions transfers energy more quickly. Sound travels through air at 344 meters/second when the temperature is 20 °C, but only 332 meters/second when the temperature is 0 °C.
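A common rule of thumb, v ≈ 331.3 + 0.6T (with T in °C), reproduces the two figures quoted above to within about 1 m/s. The short sketch below uses that approximation; it is an illustration of ours, not a formula from the unit.

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) near ordinary temperatures."""
    return 331.3 + 0.6 * temp_c

for T in (0, 20, 30):
    print(f"{T:>2} C : {speed_of_sound(T):.1f} m/s")
# about 331 m/s at 0 C, 343 m/s at 20 C, 349 m/s at 30 C
```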
To better understand sound waves, consider a large pipe or tube with a loudspeaker at one end. Although sound waves in this tube are similar in many respects to the waves on a rope, they are more difficult to visualize, because we cannot see the displacement of the air molecules as the sound wave propagates. The pulse of air pressure travels down the tube at a speed of about 340 meters/second. It may be absorbed at the far end of the tube, or it may reflect back toward the loudspeaker (as a positive pulse or a negative pulse), depending on what is at the far end of the tube.
Reflection of a sound pulse in a pipe for three different end conditions is illustrated in Figure 5. If the end is open, the excess pressure drops to zero and the pulse reflects back as a negative pulse of pressure as shown in Figure 5b; this is similar to the “fixed end” condition.
In an actual tube with an open end, a little of the sound will be radiated; most of it however, will be reflected as shown. If the end is closed, the pressure builds up to twice its value, and the pulse reflects back as a positive pulse of pressure; this condition shown in Figure 5c is similar to the “free end” reflection. If the end is terminated with a sound absorber, Figure 5d, there is virtually on reflected pulse. Such a termination is called “no echo.”
Table 1: Speed of sound in various materials

  Substance          Temperature (°C)   Meters/sec     Feet/sec
  Air                        0             331.3          1,087
  Air                       20             343            1,127
  Helium                     0             970            3,180
  Carbon dioxide             0             258              846
  Water                      0           1,410            4,626
  Methyl alcohol             0           1,130            3,710
  Aluminum                   -           5,150           16,900
  Steel                      -           5,100           16,700
  Brass                      -           3,840           11,420
  Lead                       -           1,210            3,970
  Glass                      -        3,700-5,000   12,000-16,000 [5]
Wave movements in two and three dimensions
So far we have discussed only waves that travel in one direction (for example, along a rope or in a pipe). One-dimensional waves of this type are a rather special case of wave motion. Often, waves travel outward in two or three dimensions from a source.
Water waves are an example of two-dimensional waves. Many waves can be studied conveniently by means of a ripple tank in a laboratory. A ripple tank uses a glass-bottom tray filled with water; light projected through the tray forms an image of the wave on a large sheet of paper or on a projection screen. If the materials were readily available, this would be an excellent exercise for the students. Many calculations could be performed from the data collected.
Three-dimensional waves are difficult to make visible. For this unit we will not explore the techniques used to study three-dimensional waves.
A Science and Math activity: The timer at a track meet starts the watch when he hears the sound of the gun rather than when he sees the flash of the gun being fired. If the gun were 200 m away from him, how much faster or slower would the recorded time be than the actual time?
SOLUTION: The speed of light is 300,000,000 m/s, so the light arrives virtually instantaneously. The speed of sound is about 330 m/s, so the sound takes about 200 m ÷ 330 m/s ≈ 0.6 s to reach the timer, and the recorded time will be about 0.6 s shorter than the actual time.
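The arithmetic can be checked directly; a minimal sketch, using 343 m/s for room-temperature air (the exact delay depends on the speed assumed):

```python
DISTANCE_M = 200.0       # distance from the gun to the timer
SPEED_OF_SOUND = 343.0   # m/s in room-temperature air (330 m/s gives a similar result)
SPEED_OF_LIGHT = 3.0e8   # m/s

sound_delay = DISTANCE_M / SPEED_OF_SOUND   # about 0.58 s
light_delay = DISTANCE_M / SPEED_OF_LIGHT   # effectively zero

# The watch starts late by the sound delay, so the recorded time is that much short.
print(f"sound delay: {sound_delay:.2f} s, light delay: {light_delay:.2e} s")
```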
A Science and writing activity: You have just formed a new company, Ultrasonic Unlimited. Develop an advertisement for a product that uses this sound energy. SOLUTION: An encyclopedia will describe many uses, including scientific, industrial, medical and residential applications.
Loudness is the human perception of sound intensity. The higher the intensity and amplitude, the louder the sound. The intensity of a sound is measured in units called decibels (dB). Sounds with intensities above 120 dB may cause pain and hearing loss. Prolonged noise above 150 dB can cause permanent deafness. The roar of a racing car can be 125 dB, amplified music as high as 130 dB, and some toy guns 170 dB. Figure 7 shows some familiar sounds and their intensities in dB.
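Because decibels are a logarithmic measure of intensity, levels do not add the way intensities do. A minimal sketch, assuming the standard reference intensity of 10^-12 W/m² (the threshold of hearing) and an illustrative source intensity, shows that doubling the intensity raises the level by only about 3 dB.

```python
import math

I_REF = 1e-12  # reference intensity in W/m^2, roughly the threshold of hearing

def intensity_level(intensity_w_per_m2):
    """Sound intensity level in decibels relative to the threshold of hearing."""
    return 10 * math.log10(intensity_w_per_m2 / I_REF)

one_source = 1e-3                        # illustrative intensity of a loud source
print(intensity_level(one_source))       # 90 dB
print(intensity_level(2 * one_source))   # about 93 dB: two equal sources together
```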
The effect of noise on the performance of various tasks has been the subject of several investigations in the laboratory and in actual work situations. When mental or motor tasks do not involve auditory (hearing) signals, the effects of noise on human performance have been difficult to assess.
Psychological effects of noise:
1. Steady noises below about 90 dB do not seem to affect performance.
2. Noise with appreciable strength around 1000 to 2000 Hz is more disruptive than low-frequency noise.
3. Noise is more likely to reduce the accuracy of work than to reduce the total quantity of work.
4. Noise appears to interfere with the ability to judge the passage of time.
5. There is a general feeling that nervousness and anxiety are caused by or intensified by exposure to noise.
Physiological effects of noise
Sudden noises are startling. They trigger a muscular reflex that may include an eye blink, a facial grimace or inward bending of the arms and knees. These reflexes prepare the body for defensive action against the source of the noise. Sometimes these reflexive actions interfere with tasks; sometimes they even cause accidents.
Constriction of blood vessels, reduction of skin resistance, changes in heartbeat and secretion of saliva have been observed in human responses to brief sounds. There is evidence that workers exposed to high levels of noise have a higher incidence of cardiovascular disorders, as well as ear, nose, throat and equilibrium problems, than do workers exposed to lower levels of noise.
Noise pollution: When does noise become noise pollution? Noise pollution includes sounds that are loud, annoying or harmful to the ear. These sounds can come from sources such as jackhammers, jet engines or highly amplified music.
Noise pollution can be harmful in several ways. Recall the way in which sound waves transfer energy through compressions and rarefactions. If the intensity of the sound wave is high enough, the energy carried can shatter windows and crack plastic.
When sound waves reach the human ear, the vibrations pass through its various parts. Extremely intense vibrations can rupture the eardrum, but loudness-related hearing loss usually develops gradually. Your brain perceives sound when the auditory nerve carries a nerve impulse to the brain. The nerve is composed of many tiny nerve fibers surrounded by a fluid inside the ear. Hearing loss occurs when intense compressional waves traveling through the fluid destroy these nerve fibers. Loud sounds in the frequency range of 4000 to 20,000 Hz cause most of the damage to these nerve fibers. Amplified music, motorcycles and machinery are sources of sound in this frequency range that often cause hearing loss after prolonged listening.
Controlling noise pollution (environmental acoustics)
Noise pollution can be controlled in a number of ways. Reducing the intensity of the sound waves at their sources is the most direct: acoustical engineers have quieted the noise made by many devices; for example, mufflers help quiet automobile engines. In buildings, thick heavy walls and well-sealed doors and windows may be used to block sound, and builders use insulation to reduce sound. Industrial workers and other people exposed to intense noise should wear some form of ear protection to help prevent hearing loss.6
Musical sounds What is Music? Vibrations cause both music and noise, but there are some important differences. You can easily make a noise by just speaking a word or tapping a pencil on a desk, but it takes some deliberate actions to create music. Music is created using specific pitches and sound quality and by following a regular pattern.
A stringed musical instrument such as a guitar generates sound when you pluck a string. Plucking a string creates waves in the string. Because the ends of the string are fastened, the waves reflect back and forth between the ends, causing the string to vibrate at certain frequencies that are harmonically related to each other.
The guitar string, like most objects, has a natural frequency of vibration. Plucking it causes the string to vibrate at its natural frequency.
If you were to play a note of the same pitch and loudness on a flute and on a piano, the sound wouldn’t be the same. These instruments have a different quality of sound. The quality does not refer to how good or bad the instrument sounds. Sound quality describes the difference among sounds of the same pitch and loudness. All sounds are produced by vibrations of matter, but most objects vibrate at more than one frequency. Distinct sounds from musical instruments are produced by different combinations of these wave frequencies.6
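The point that instruments playing the same pitch differ in their mix of frequencies can be illustrated numerically. A minimal NumPy sketch, with made-up harmonic amplitudes (the names flute_like and piano_like are purely illustrative), builds two tones of the same fundamental frequency whose waveforms differ only in harmonic content.

```python
import numpy as np

SAMPLE_RATE = 8000                        # samples per second
t = np.arange(0, 0.01, 1 / SAMPLE_RATE)   # 10 milliseconds of signal
F0 = 440.0                                # shared fundamental frequency in Hz

def tone(harmonic_amplitudes):
    """Sum harmonics of F0, weighted by the given amplitudes."""
    return sum(a * np.sin(2 * np.pi * F0 * (n + 1) * t)
               for n, a in enumerate(harmonic_amplitudes))

# Same pitch, different timbre: only the harmonic mix differs (amplitudes are made up).
flute_like = tone([1.0, 0.1, 0.02])
piano_like = tone([1.0, 0.6, 0.4, 0.25, 0.1])
print(flute_like[:3], piano_like[:3])
```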
Mini-Lab: How can a hearing loss change the sounds you hear? To simulate a hearing loss, tune a radio to a station. Turn the volume down to the lowest level at which you can hear and understand it. Turn the bass to maximum and the treble to minimum. If the radio does not have these controls, mask out the higher-frequency sounds with heavy pads over your ears. Which voices are harder to understand, men's or women's? Which letter sounds are the most difficult to hear, vowels or consonants? How could you help a person with a hearing loss understand what you say? Solution: Most hearing losses are in the higher frequencies of the speech range. Most affected are women's voices and consonant sounds. People with hearing losses should be spoken to face to face, at a steady, unrushed pace, with a slight emphasis on consonant sounds.
Pythagoras was also one of the first to insist that precise definitions should form the cornerstone for logical proofs in geometry, although he is better remembered in this field for the unhistorical association of his name with the already well-known theorem about the sums of the squares on the sides of a right triangle. His teacher Thales, the first of the seven Wise Men of Greece, had already brought deductive rigor to bear on geometry by introducing the concept of logical proof for abstract propositions.
The most enduring contribution Pythagoras made to acoustical theory was to establish the inverse proportionality between pitch and the length of a vibrating string.
Aristotle (384-322 B.C.) probably deserves to be called the first mathematical physicist, since he was deeply concerned with the whole range of natural philosophy and with the use of mathematical reasoning as a tool for examining nature. The relative velocity of transmission of light and sound periodically commanded the attention of philosophers and scientists until nearly the middle of the eighteenth century. In referring to the physical nature of sound Aristotle wrote "lightning comes into existence after the collision and the [resulting] thunder, though we see it earlier because sight is quicker than hearing." This inverted notion that thunder causes lightning persisted for centuries.7
Other writers of antiquity also expressed the proper conclusion concerning the relative velocity of the transmission of light and sound. Pliny the Elder (A.D. 23-79) observed, "it is certain that when thunder and lightning occur simultaneously, the flash is seen before the thunderclap is heard (this is not surprising, as light travels more swiftly than sound)."
In about 400 B.C., the Greek scholar Archytas (428-347 B.C.) expressed the fundamental idea that sound is always produced by the motion of one object striking another. This statement was paraphrased in one way or another and repeated by almost every writer of ancient and medieval times who considered the generation of sound. About 50 years later, the Greek philosopher Aristotle suggested that sound is carried to our ears by the movement of air. From then until about A.D. 1300, little scientific investigation took place in Europe, but scientists in the Middle East and India developed some new ideas about sound by studying music and working out systems of music theory.
European scientists began extensive experiments on the nature of sound during the early 1600's. About that time, the Italian astronomer and physicist Galileo demonstrated that the frequency of sound waves determines pitch. Galileo scraped a chisel across a brass plate, producing a screech. He then related the spacing of the grooves made by the chisel to the pitch of the screech.
About 1640, Marin Mersenne, a French mathematician, obtained the first measurement of the speed of sound in air. About 20 years later, the Irish chemist and physicist Robert Boyle demonstrated that sound waves must travel in a medium. During the late 1600's, the English scientist Sir Isaac Newton formulated an almost correct relationship between the speed of sound in a medium and the density and compressibility of the medium.
In the mid-1700's, Daniel Bernoulli, a Swiss mathematician and physicist, explained that a string could vibrate at more than one frequency at the same time. In the early 1800's, a French mathematician named Jean Baptiste Fourier developed a mathematical technique that could be used to break down complex sound waves into the pure tones that make them up. During the 1860's, Hermann von Helmholtz, a German physicist, investigated the interference of sound waves, the production of beats and the relationship of both to the ear's perception of sound.8
Research Activity: Euclid of Alexandria (330-275 B.C.), Archimedes (ca. 287-212 B.C.), Galileo (1564-1642) and Plato (ca. 429-347 B.C.) are but a few of the great scientists of sound. Have students do research to gather information on ancient scientists and on modern-day scientists who specialize in sound and acoustics.
We plan to teach these three units over one marking period, using as many of the same students as possible. For the completion of the teaching, we have prepared a culminating activity: a school-wide assembly whose participants will be the students who made their own instruments.
The assembly will consist of diverse entertainment from professional entertainers. Students will prepare poster-board illustrations for each musical "family" and will display their completed instruments. One student from each "family" will explain how their instruments make music. The students will perform at least one musical number together. They will also perform with the professional performers using the instruments they made. The performers will come from various ethnic backgrounds.
Students will report on the contributions of some of the first people to study sound.
The program will conclude with an International feast prepared for all the participants in the assembly. On the day of the assembly, everyone will be encouraged to dress in ethnic clothes.
This text covers how far and fast sound travels. Shows how to design acoustical spaces and quiet spaces. Includes a case study of Philharmonic Hall.
Kock, W. E. Sound Waves and Light Waves. 1965. Doubleday, New York.
Text describes how sound waves and light waves travel. Explains sound pressure and sound velocity, pitch and frequency.
Rossing, Thomas D. The Science of Sound. Addison-Wesley, 1990.
This text covers advanced topics, including the perception and measurement of sound. It also explores the human voice and environmental noise.
Smith, Ballinger. Physical Science: Waves, Light and Sound. Merrill Press, New York, 1998.
This textbook covers physical science. Chapter 18 introduces wave phenomena. The properties and behavior of transverse mechanical waves and the properties of sound are covered.
Kryter, K.D. Noise and Man. 1970. Academic Press, New York.
This book explores the effects of noise on people. Explains how noise is defined and what can be done to protect you from the dangers of noise.
Hunt, Frederick E. Origins in Acoustics: The Science of Sound from Antiquity to the Age of Newton. Yale University Press, New Haven and London, 1978.
This text covers the history of sound. The contributions of the persons first involved in science are discussed. The book starts with Pythagoras (570-497 B.C.) and ends with Boethius (A.D. 480-524).
2Rossing, T.D. The Science of Sound. Addison-Wesley, 1990. Page 90.
3Smith, Ballinger. Merrill-Physical Science. Waves, Light and Sound. McGraw Hill, 1993. Page 461.
4Rossing, T.D. The Science of Sound. Addison-Wesley, 1990. Page 86.
5Rossing, T.D. The Science of Sound. Addison-Wesley, 1990. Page 41.
6Smith, Ballinger. Merrill-Physical Science: Waves, Light and Sound. McGraw Hill, 1993. Pages 463-467.
7Hunt, F.V. Origins in Acoustics. The Science of Sound. Yale University Press. Pages 9, 15, 21, 22, and 23.
8World Book Encyclopedia. Sound. 1991 Edition. Scott Fetzer Company, Page 605.
This section of the encyclopedia deals with sound. Included are the human voice, animal sounds and musical sounds. Frequency and pitch, intensity and loudness, and the speed of sound are explained.
Reuben, Gabriel H. What is Sound? Chicago: Benefic Press. 1960.
This book is part of the What Is series. Sound is explained using illustrations and drawings. The text is written to be easily understood by young readers.
Kettelkamp, Larry. The Magic of Sound. New York: William Morrow and Company. 1982.
This book explains why and how we hear as we do and describes some of the applications of sound in contemporary life. Easy reading for young readers.
Newman, Frederick R. Zounds. New York: Random House, 1983.
This book is a guide to sound making. Readers learn how their voice works and how it can be used to make many kinds of sound.
Broekel, Ray. Sound Experiments. Chicago: Childrens Press, 1983.
This is a book of simple sound experiments that can be conducted using household materials.
A Brief History of Einstein's Special Theory of Relativity. The main conclusions of Einstein's special theory of relativity are the Lorentz transformation equations. They are called the "Lorentz transformation equations" because they had already been discovered, before Einstein's first paper, by H. A. Lorentz, taking a Newtonian approach. That is where I will pick up the story about the Einsteinian revolution in physics, since spatiomaterialism is merely following in the footsteps of Lorentz. What I will call the four "Lorentz distortions" are sufficient to explain all of the predictions by which Einstein's special theory of relativity has been confirmed.
Lorentz. By 1887, some eighteen years before Einstein's paper, Michelson and Morley had made experiments that showed that light has the same velocity relative to any object, regardless of its own motion. What made their result puzzling was the Newtonian assumption that the medium in which light propagates is a "luminiferous ether," a very subtle kind of material substance that was supposed to be at rest in absolute space. Given that the velocity of light is everywhere the same relative to absolute space, they expected the velocity of light, as measured from a material object, to vary with that object's own velocity in absolute space, just as the velocity of ripples propagating in a pond arrives faster (or slower) when a boat is moving toward them (or away from them).
Michelson and Morley used an interferometer, which compares the two-way velocities of light in perpendicular directions; that is, light is reflected back from mirrors in perpendicular directions and the signals are compared to see if one is lagging behind the other. They made measurements at various points in the Earth’s orbit around the sun, where the Earth should have different velocities in absolute space. On a moving object, the time it takes for light to travel both to and from a distant mirror in the direction of absolute motion should be different from the time it takes to travel an equal distance in the transverse direction. The margins of error were small enough, given the velocity of light and the velocity of the Earth in its orbit around the sun, that it should have been possible for their interferometer to detect absolute velocity. But Michelson and Morley failed to detect any difference at all in the time it took light to travel the same distance in perpendicular directions. Absolute motion could not be detected.
Length contraction. The Michelson-Morley result was surprising, but even before Einstein published his special theory in 1905, Lorentz had proposed a Newtonian explanation of it. Lorentz showed, in 1895, that their result could be explained physically, if the motion of such an apparatus in absolute space caused its length to shrink in the direction of motion as a function of its velocity by a factor of √(1 - v²/c²). Lorentz argued that this length contraction is a real physical change in the material object that depends on its motion relative to absolute space.
The equation was L = L₀√(1 - v²/c²), where L₀ is the length at absolute rest. The shrinkage had been proposed independently by George F. Fitzgerald in 1889 and hence became known as the "Lorentz-Fitzgerald contraction."
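A minimal Python sketch of this contraction factor; the one-meter rod and the 0.6c velocity are illustrative values, not taken from the text.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def contraction_factor(v):
    """The Lorentz-Fitzgerald factor sqrt(1 - v^2/c^2)."""
    return math.sqrt(1 - (v / C) ** 2)

L0 = 1.0       # length of a rod at absolute rest, in meters
v = 0.6 * C    # illustrative velocity through absolute space
print(L0 * contraction_factor(v))  # 0.8: the moving rod measures 80% of its rest length
```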
Lorentz tried to explain the length contraction physically, as an effect of motion through a stagnant ether on the electrostatic forces among a body's constituent charged particles. But he could just as well have taken it to be a law of physics, making the Lorentz-Fitzgerald contraction the discovery of a new, basic physical law. (An ontological explanation of it will be suggested in the last section of this discussion of the special theory of relativity.)
Lorentz also described the length contraction as a mathematical transformation between the coordinates of a reference frame based on the moving material object and the coordinates of a reference frame at absolute rest. Lorentz started with the Galilean transformation by which Newtonians would obtain the spatial coordinates used on an object in uniform motion in the x-direction, x' = x - vt; combining that with the length contraction he had discovered, he came up with the transformation equation x' = (x - vt)/√(1 - v²/c²) for obtaining the spatial coordinates on the moving material object.
Time dilation. There is, however, another distortion that material objects undergo as a function of their absolute motion. That is a slowing down of clocks (and physical processes generally) at the same rate as the length contractions, or the so-called "time dilation," which took somewhat longer for Lorentz to discover.
The Galilean transformation for time in Newtonian physics is simply t = t', because Newtonian physics assumes that time is the same everywhere. But by using transformation equations to describe the distortions in material objects, Lorentz found that he had to introduce a special equation for transforming time: t' = t - vx/c² (Goldberg, p. 94). The new factor in the transformation equation, vx/c², implied that time on the moving frame varies with location in that frame. Lorentz called it "local time," but he did not attribute any physical significance to it. "Local time" is not compatible with the belief in absolute space and time, and Lorentz described it as "no more than an auxiliary mathematical quantity" (Torretti, p. 45, 85), insisting that his transformation equations were merely "an aid to calculation" (Goldberg, p. 96).
The slowing down of physical processes is called "time dilation." Lorentz discovered this distortion by tinkering with various ways of calculating the coordinates used on inertial reference frames in relative motion. Thus, it is natural to describe time dilation as the slowing down of clocks on the moving reference frame. It was included in the final version of Lorentz's explanation, now called the "Lorentz transformation equations" (Lorentz 1904). Those equations contained not only the length contraction and the transformation for "local time," but also the implication that clocks on moving frames are slowed down at the same rate as lengths are contracted (that is, by the factor √(1 - v²/c²)). The final Lorentz equation for the time transformation included both the variation in local time and time dilation: t' = (t - vx/c²)/√(1 - v²/c²).
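A minimal Python sketch of the full transformation as stated here, combining the local-time term with the dilation factor; the event coordinates and the velocity 0.6c are illustrative, and units are chosen with c = 1 to keep the numbers readable.

```python
import math

C = 1.0  # work in units where the speed of light is 1

def lorentz(t, x, v):
    """Transform event coordinates (t, x) from the rest frame to a frame moving at v."""
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    t_prime = gamma * (t - v * x / C ** 2)   # local-time term plus time dilation
    x_prime = gamma * (x - v * t)            # Galilean shift plus length contraction
    return t_prime, x_prime

print(lorentz(t=2.0, x=1.0, v=0.6))  # the same event as seen from the moving frame
```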
Though Lorentz took the distortions that he discovered in fast-moving material objects to be laws of nature, he did not think that they were basic. He thought they were effects of motion on the interactions between electrons and the ether, which could be explained by his electronic theory of matter, and he saw explaining this effect as the main challenge to Newtonian physics. The transformation equations themselves never seemed puzzling to Lorentz, because he never took them to be more than a mathematical aid to calculation.
Poincaré. H. Poincaré thought he saw more clearly what Lorentz had discovered than Lorentz himself. As early as 1895, Poincaré had expressed dissatisfaction with Lorentz’s piecemeal approach, introducing one modification of the laws of Newtonian physics after another in order to account for different aspects of the phenomenon discovered by Michelson and Morley. Instead of such ad hoc modifications, he urged the recognition of what he called a “principle of relativity” to cover all the phenomena involved in fast-moving objects. As Poincaré put it in 1904, the principle of relativity requires that “the laws of physical phenomena should be the same for an observer at rest or for an observer carried along in uniform movement of translation, so that we do not and cannot have any means of determining whether we actually undergo a motion of this kind” (from Torretti, 83).
A principle of relativity like this had, in effect, been affirmed by Newton himself, when he admitted that his laws of motion depend, not on the absolute velocities of material objects, but only on their relative velocities. That is, Newton had already denied that absolute rest could be detected by mechanical experiments. It was only after Maxwell discovered that light could be explained as an electromagnetic wave that it seemed absolute motion might be detected. Thus, Poincaré saw Lorentz's discovery of distortions in fast-moving material objects as a way of extending Newton's principle of relativity to cover electromagnetic phenomena.
Understanding how the undetectability of absolute motion could be a result of the distortions that Lorentz had discovered, he referred to Lorentz's theory as "Lorentz's principle of relativity" even after Einstein had published his special theory and Lorentz himself was attributing the principle of relativity to Einstein (Torretti 85, Goldberg 212, and Holton 178). Indeed, Poincaré joined Lorentz in the attempt to explain the Lorentz distortions by the motion of material objects through absolute space, also expecting to find their cause in the dynamics of electrons; he also thought that motion through the ether caused material objects to shrink in the direction of motion and natural clocks to slow down by the exact amount required to mask their motion, as implied by Lorentz's transformation equations (Goldberg 94-102, Torretti 38-47). Furthermore, Poincaré apparently thought that what Lorentz said about those equations in his 1904 work answered his own demand for a "demonstration of the principle of relativity with a single thrust" (Goldberg 214-15).
Lorentz's explanation of the distortions was not, however, a complete explanation of the principle of relativity. There are really two quite different aspects of the phenomenon described by the principle of relativity, and Lorentz had explicitly explained only one of them.
What Lorentz’s electron theory of matter (and Poincaré’s own refinements of it) explained physically were the Lorentz distortions in material objects with absolute velocity. That explained the negative outcome of the Michelson-Morley experiment: the contraction of lengths in the direction of motion and the slowing down of clocks as a function of motion through absolute space does make it physically impossible to detect absolute motion on a moving object by measuring the velocity of light relative to it. And that is one way in which inertial reference frames are empirically equivalent, because it holds of measurements made using any material object in uniform motion as one's reference frame, regardless of its motion through absolute space.
But there is more to the principle of relativity than explaining the null result of the Michelson-Morley experiment. The transformation equations that Lorentz constructed to describe the effects of absolute motion on material objects predict the outcomes of other experiments, such as attempts to measure directly the lengths of high-velocity measuring rods and the rate at which high-velocity clocks are ticking away. Though such experiments are more difficult to perform, they are conceivable, and Lorentz's equations do make predictions about them: moving measuring rods will be shrunken in the direction of motion and moving clocks will be slowed down. That suggests another way of detecting absolute motion. One might compare measuring rods or clocks that are moving at a whole range of different velocities with one another and take the one with the longest measuring rods and quickest clocks to be closest to absolute rest. Hence, the principle of relativity would be false.
It is not possible, however, to detect absolute rest in this way, and as it happens, its impossibility is also predicted by Lorentz's theory, because he formulated his description of the Lorentz distortions in terms of transformation equations. Transformation equations are equations for transforming the coordinates obtained by using one material object as a frame of reference into the coordinates obtained by using another material object as a frame of reference, and to be consistent, they must work both ways. That is, it must be possible to obtain the original coordinates by applying the transformation equations to the transformed coordinates. Thus, whatever distortions observers at absolute rest may find in material objects with a high absolute velocity will also be found by observers in absolute motion in material objects that are at absolute rest.
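That the equations work both ways can be checked numerically: transforming an event into the moving frame and then applying the transformation with the sign of the velocity reversed recovers the original coordinates. A minimal self-contained sketch with illustrative numbers (c = 1):

```python
import math

def lorentz(t, x, v, c=1.0):
    """Lorentz transformation of event coordinates (t, x) into a frame moving at v."""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

t, x, v = 2.0, 1.0, 0.6
t_p, x_p = lorentz(t, x, v)              # coordinates in the moving frame
t_back, x_back = lorentz(t_p, x_p, -v)   # transform back with the velocity reversed
print((t_p, x_p), (t_back, x_back))      # the round trip returns (2.0, 1.0), up to rounding
```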
The recognition that Lorentz's theory, being formulated in terms of transformation equations, implied that all such inertial reference frames are empirically equivalent is presumably what led Poincaré to proclaim that Lorentz had finally explained the truth of the principle of relativity. Absolute rest and motion cannot be detected from any inertial reference frame.
Lorentz's theory was not, however, an adequate explanation of the principle of relativity, for there is still something puzzling about the empirical equivalence entailed by the symmetry of the Lorentz transformation equations.
Lorentz meant his transformation equations to be a way of describing the length contraction and time dilation in material objects with absolute motion, for that would explain the Michelson-Morley experiment, that is, why absolute motion cannot be detected by measuring the velocity of light in different directions. But since the transformation equations describe a symmetry between the members of any pair of inertial reference frames, they imply that observers using a fast-moving material object as the basis of their reference frame would observe a length contraction in measuring rods that were at absolute rest and a time dilation in clocks at absolute rest. That makes it impossible to detect absolute rest or motion by comparing different inertial reference frames with one another. But it is puzzling, because it is hard to see how both views could be true at the same time, that is, how two measuring rods passing one another at high velocity could both be shorter than the other and how two clocks passing by one another could both be going slower than the other.
In other words, Lorentz's theory does not really give a physical explanation of what Poincaré called the "principle of relativity." What entails the truth of the principle of relativity is the description of the Lorentz distortions in terms of transformation equations; the inability to detect absolute rest and motion by comparing inertial frames with one another comes from the symmetrical relationship that transformation equations represent as holding between the members of any pair of inertial reference frames. That symmetry is not physically possible, at least, not in the sense of "physical" that Lorentz had in mind when he tried to explain the distortions as occurring to material objects because of their motion in absolute space. If inertial frames are material objects in absolute space, then their measuring rods cannot both be shorter than the other and their clocks cannot both be slower.
As we shall see, what enables Lorentz's transformation equations to predict the symmetry of distortions is the "local time" factor in the time equation, vx/c², which Lorentz insisted was just an "aid to calculation." It represents the readings that would be given by clocks on a moving reference frame that have been synchronized by using light signals between them as if they were all at absolute rest, that is, on the assumption that the one-way velocity of light is the same both ways along the pathway between any two clocks (as required by Einstein's definition of simultaneity at a distance). That assumption is false, as Lorentz understood these phenomena, and clocks on the moving inertial frame would be mis-synchronized. It can be shown, as we shall see, that this way of mis-synchronizing clocks on a moving frame combines with the Lorentz distortions that the moving frame is actually suffering to make it appear that its own Lorentz distortions are occurring in the reference frame at absolute rest (or moving more slowly). This is a physical explanation, given how the other frame's measuring rods and clocks are measured. But it is an explanation of the principle of relativity that reveals it to be the description of a mere appearance. Though there is an empirical equivalence among inertial frames, a physicist who accepted Lorentz's Newtonian assumptions would insist that it has a deeper physical explanation.
It was not Lorentz, however, but Poincaré who declared that Lorentz had explained the truth of the principle of relativity, and Poincaré's acceptance of Lorentz's explanation as adequate may have been colored by his own philosophical commitment to conventionalism. Poincaré viewed the choice between Euclidean or non-Euclidean geometry as conventional, and he argued that convention is also what raised inertia and the conservation of energy to the status of principles that could not be empirically falsified. Poincaré's acceptance of the principle of relativity should probably be understood in the context of this more or less Kantian skepticism about knowing the real nature of what exists. Considering how the standard of simultaneity at a distance varies from one inertial reference frame to another (depending on the "local time" factor in the Lorentz transformation equations), the principle of relativity could also be seen as a conventional truth.
Poincaré's pronouncement that Lorentz's theory had explained the principle of relativity could not have sat well with Lorentz himself. Lorentz may have continued to call it "Einstein's principle of relativity" because he realized that it was not explained by his theory about how spatial and temporal distortions are caused in material objects by their absolute motion. What is responsible for the principle of relativity is the symmetry in pairs of inertial frames entailed by his equations being transformation equations. If the distortions did not hold symmetrically in any pair of inertial frames, it would be possible to detect absolute rest and motion. But to my knowledge, Lorentz never argued explicitly that what he called "local time" on the moving material object (that is, vx/c² in the time equation) represents a mis-synchronization of clocks on the moving frame that causes the moving frame's own Lorentz distortions to appear to be occurring in the other inertial reference frame.
The Newtonian explanation of all the relevant phenomena did not, therefore, have an adequate defender. Lorentz was more concerned to find an adequate physical explanation of the distortions he had discovered in material objects, and Poincaré was more interested in defending conventionalism. That is the Newtonian context in which Einstein's special theory of relativity won the day.
Einstein. Einstein took a dramatically different approach from both Lorentz and Poincaré. Instead of taking the principle of relativity to be an empirical hypothesis that could be explained physically by deeper, Newtonian principles, or as a conventional truth, Einstein raised the principle of relativity to the status of a postulate, which was not to be explained at all, but rather accepted as basic and used to explain other phenomena (Zahar 90-2). The mathematical elegance of Einstein's explanation of these phenomena is stunning. From the premise that all inertial reference frames are empirically equivalent, he derived a description of how two different inertial reference frames would appear to each other; that is, he deduced the Lorentz transformation equations.
Einstein's new approach can be seen most clearly by considering the structure of his argument. It is represented below in a diagrammatic form.
The Principle of Relativity: The laws of nature apply the same way on all inertial frames.
The Light Postulate: The velocity of light is the same on all inertial frames.
The Definition of Simultaneity at a Distance: The local event halfway through the period required for light to travel to the distant event and back is simultaneous with the distant event.
To obtain the second frame's coordinates from the first frame (the Lorentz transformation equations, kinematic phenomena):
  x' = (x - vt)/√(1 - v²/c²),  t' = (t - vx/c²)/√(1 - v²/c²)
To obtain the first frame's coordinates from the second frame:
  x = (x' + vt')/√(1 - v²/c²),  t = (t' + vx'/c²)/√(1 - v²/c²)
Relativistic increase in mass (dynamic phenomena):
  m = m₀/√(1 - v²/c²)
The assumption that inertial frames are all empirically equivalent takes the form of three premises in Einstein's argument: the Principle of Relativity, the Light Postulate, and Einstein's Definition of Simultaneity at a Distance (see table). Einstein's principle of relativity holds, with Poincaré, that the laws of nature hold in the same way on every inertial reference frame. That allowed Einstein to assume that Maxwell's laws of electromagnetism hold universally, and he considered what would be true of two different inertial frames in the same world. But in order to deduce the Lorentz transformation equations, Einstein also had to assume that the velocity of light is the same relative to every inertial frame (the light postulate) and, accordingly, that simultaneity at a distance is defined on each reference frame as if the velocity of light were the same both to and back from a distant object.
What Einstein deduced from these premises are the “Lorentz transformation equations,” that is, equations for transforming the coordinates of any given inertial reference frame into those of any other.
The Lorentz transformation equations imply that any material object moving relative to any other inertial frame at a velocity approaching that of light will appear to suffer the Lorentz distortions: its clocks (and all physical processes) will be slowed down, and its measuring rods (and all material objects) will be shortened in the direction of its motion, both by the same factor, √(1 - v²/c²), which is a function of its velocity in the observer's reference frame.
Einstein also inferred from these kinematic distortions and his principle of relativity that the mass of objects moving in an inertial frame increases at the same rate, making three distortions altogether. That dynamical implication is the source of Einstein's most famous equation, E = mc².
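A minimal Python sketch of the relativistic mass increase and the rest-energy relation; the electron rest mass and the 0.9c velocity are illustrative choices, not values from the text.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def relativistic_mass(rest_mass, v):
    """Mass as measured from a frame in which the object moves at velocity v."""
    return rest_mass / math.sqrt(1 - (v / C) ** 2)

m0 = 9.109e-31                          # electron rest mass in kg (approximate)
print(relativistic_mass(m0, 0.9 * C))   # roughly 2.3 times the rest mass
print(m0 * C ** 2)                      # rest energy E = m*c^2, about 8.2e-14 joules
```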
It should be emphasized that there are really two sets of transformation equations. It may not seem that way, because Einstein's conclusion is often stated as just one of the two sets of equations listed above, making it look mathematically simpler. But that formulation overlooks a mathematical detail and thereby obscures what Einstein's conclusion is about.
Though the Lorentz transformation is exactly the same both ways between the members of any pair of inertial reference frames, it requires two, non-identical sets of transformation equations, because their relative velocity has the opposite sign for each observer. That is, the two coordinate systems are set up so that their origins coincide when t = 0 and t' = 0, and since they are moving in opposite directions, the relative velocity is v for one of them and -v for the other. Thus, in order for the transformation to be symmetrical, one set of transformation equations has to have the opposite sign for the second factor in the numerator of the equations for space and time.
Since this seems to be a mere technicality, the conclusions of Einstein’s argument are usually represented as a single set of Lorentz transformation equations (the first set in the above table). Duplication is avoided by introducing a special mathematical symbol to make the single set of equations represent both transformations in any pair of inertial frames. Thus, Einstein's conclusion seems more like just another universal law of nature. But this is just homage to the Pythagorean ideal of mathematical simplicity, which obscures the fact that Einstein's theory is, in the first instance, about the symmetry that holds between the members of every pair of inertial frames.
It should also be emphasized that Einstein's theory is about how reference frames are related, and only indirectly about the material objects on which they are based. Though it does have implications concerning the relationship between material objects with a high relative velocity, that relationship is described by way of a mathematical transformation that holds between the reference frames based on them.
Inertial reference frames are based on material objects that are not being accelerated, and what makes the material object a reference frame is that it is used as the basis for a coordinate system by which the locations and times of events throughout the universe can be measured. (For this purpose, it is useful to think of an inertial reference frame as a grid of rigid bars extending wherever needed in space with synchronized clocks located everywhere.)
Notice that Einstein's three premises are all about reference frames based on material objects. Indeed, his definition of simultaneity prescribes how clocks must be synchronized to set up such a reference frame. The light postulate makes explicit the assumption about the velocity of light on which his definition of simultaneity is based. And the principle of relativity states that all the laws of physics will hold the same way within that reference frame as every other one, that is, will make correct predictions about what happens in that reference frame.
Einstein derives conclusions from his premises by assuming that there are two different inertial reference frames in the world and figuring out how they must appear to one another. Since his premises are about their reference frames, it is hardly surprising that his conclusion is about a mathematical transformation between their coordinates.
Indirectly, however, Einstein's conclusion is a description of how material objects with different constant velocities are related to one another as parts of the same world, since the reference frames in question are based on material objects. But to see Einstein's conclusion as a description of how material objects are related in space is to take Lorentz's approach. For Lorentz, these same transformation equations were just a mathematically convenient way of describing from the absolute frame the spatial and temporal distortions that occur in material objects with a high velocity in absolute space.
By calling his argument a theory of relativity, Einstein emphasized that his theory is about the empirical equivalence of all inertial reference frames, not the relationship between the material objects on which they are based. Observers on each inertial reference frame have their own view of the relationship between the material objects involved, but they are different views, and it is their views that are related by the Lorentz transformation equations. The symmetry of the relationship between their reference frames is what is crucial for Einstein, because that is what rules out any way of detecting absolute rest or motion by comparing inertial frames to one another and ensures that there is nothing to distinguish one inertial frame from another except their velocities relative to one another.
The Lorentz distortions in material objects are, however, a consequence of the Lorentz transformation equations that Einstein deduced. And if one does follow Lorentz, interpreting them as a way of describing the material objects on which the inertial reference frames are based, then the Lorentz transformation equations lead to paradoxes, as I have already suggested. Those equations imply that observers using any given inertial reference frame will find the Lorentz distortions occurring in the material objects on which the other inertial reference frame is based, and thus, the symmetry of the transformation for any pair of inertial frames leads to paradoxes.
Consider two inertial frames in motion relative to one another. From the first frame it appears that clocks on the second frame are slowed down. That would make sense, if from the second frame, it appeared that first-frame clocks were speeded up. But special relativity implies that it also appears from the second frame that clocks on the first frame are slowed down. That is, the distortions are symmetrical on Einstein’s theory, not the reverse of one another, as one might expect. And if the Lorentz distortions are really symmetrical, it is inconceivable that the two inertial frames are just material objects moving relative to one another in absolute space, because in absolute space, there can’t be two clocks next to one another both of which are actually going slower than the other. If one assumes that Einstein's theory is describing material objects, one must give up the assumption that those objects are located in absolute space. They are, of course, parts of the same world, but they must be related to one another in some other way.
The same problem arises from the symmetry of the length contraction and the relativistic mass increase, for there cannot be two measuring rods passing one another in space that are both shorter than the other. Nor can there be two material objects that are each more massive than the other. It is simply not possible for material objects located in absolute space.
None of this should be a surprise, however, because even the Light Postulate itself is incompatible with absolute space (or at least, with the assumption that light has a fixed velocity relative to absolute space). Though Newtonian physics had taken absolute space to contain the medium in which light propagates, Einstein assumed that the velocity of light relative to every object is the same, regardless of their own velocities relative to other objects in the world. Thus, Einstein held that the velocity of light would be the same in both members of any pair of inertial frames. This is not possible, if electromagnetic waves propagate through (an ether in) absolute space, like waves in water, for the motion of an object through waves propagating in space would change the velocity of those waves relative to the object—just as the motion of a row boat through ripples propagating in a pond changes the velocity of those ripples relative to the boat.
Taken as a description of the relationship between material objects in space, therefore, Einstein's special theory of relativity leads to paradoxes. But Einstein was not discouraged by these paradoxes. He was not thinking of inertial reference frames as material objects that are related in space, that is, in absolute space, or a space that is the same for both material objects. He was making a more abstract, mathematical argument and, in the process, giving physics a new standpoint from which to explain all physical processes.
That Einstein's basic approach is different from Lorentz's can be seen in what made Einstein curious about these phenomena in the first place. It was not the Michelson-Morley experiment, but rather something peculiar about the connection between classical mechanics and Maxwell’s theory of electromagnetism (Zahar 99-100). Einstein realized that even though Maxwell’s theory was standardly interpreted as referring to absolute space, absolute space was not needed in order to explain electromagnetic phenomena. For example, a conductor moving through a magnetic field at absolute rest moves electrons exactly the same way as if it were at absolute rest and the magnetic field were moving. That is what suggested the principle of relativity to Einstein, and though from it he derived the same transformation equations that Lorentz had proposed in 1904, Einstein claimed not to know about Lorentz's 1904 work.
By raising the principle of relativity to the status of a postulate, Einstein was assuming, in effect, that the deepest truth that can be known about the nature of space and time is that inertial frames are all empirically equivalent. And by relying on the predictions of measurements derived from that principle to justify his theory, Einstein had the support of the positivists, who dominated philosophy of science at that time. Indeed, Einstein admits to having been influenced by Ernst Mach at the time of his first paper on special relativity. To positivists, the paradoxes mentioned above about two clocks both going slower than the other and two measuring rods both shorter than the other are not real problems, but merely theoretical problems. Theoretical propositions that could not be spelled out in terms of observations were dismissed as "metaphysical," as if theories were mere instruments for making predictions. That attitude could be taken about the aforementioned paradoxes, because there is never any occasion in which two clocks can be directly observed both going slower than the other (or two measuring rods observed both shorter than the other). Observations are made from one inertial reference frame or another, and if both members of some pair of inertial frames are observed from a third reference frame, their clocks and measuring rods do not appear this way because of the Lorentz distortions that are introduced by its own velocity relative to them.
Though when taken as a description of material objects, the special theory of relativity is incompatible with the existence of absolute space, Einstein did not attempt to use its implications to show that absolute space does not exist. He was making a mathematical argument to show that accepted theories in Newtonian physics, which did assume the existence of absolute space, could all be replaced by theories that do not mention absolute rest or motion at all. All he explicitly claimed was that physics does not require an “absolutely stationary space” and that the notion of a “‘luminiferous ether’ will prove to be superfluous” because the “phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the ideas of absolute rest” (Einstein, 1923 p. 37). It could be argued, therefore, that Einstein was merely imitating empiricist skepticism about theoretical entities generally by casting doubt on the reality of absolute space.
As it turned out, Einstein's theory proved to be remarkably successful in making surprising predictions of new experiments. For example, unstable particles have longer half-lives when moving at velocities approaching that of light. Clocks flown around the earth are indeed slowed down compared to clocks that stayed at home. The most famous new prediction of special relativity, E = mc2, has been confirmed repeatedly. It is a consequence of the relativistic increase in mass, which Einstein first pointed out, and without it, high energy physics as we know it today would be inconceivable. Finally, the equations of special relativity have become (after Dirac) the foundation of quantum field theory as well as Einstein’s theory of gravitation. The Lorentz transformation is now so basic to physics that “covariance” (or “Lorentz covariance”) is taken as a constraint on all possible laws of physics.
To be sure, Newtonian physicists complained about the loss of intuitive understanding that came with the acceptance of Einstein's way of explaining these phenomena. It was no longer possible to construct in ordinary spatial imagination a picture of the nature of the world. But that objection did not detract from the predictive success of Einstein's theory, and the Einsteinian revolution made the capacity of mathematical arguments to make surprising predictions of precise measurements the establishment criterion for accepting theories in contemporary physics.
But physics is not just mathematics. A theory in physics is generally thought to be true when it corresponds to what exists, and if the special theory of relativity does not correspond to material objects in absolute space, we want to know what it does correspond to. The success in making surprising predictions of what happens by which Einstein's theory has been confirmed means that it corresponds to regularities that hold of change in the world, but it is natural to want to know the nature of what exists that makes those regularities true. The answer given by contemporary physics is spacetime, and it was Minkowski who made that answer possible.
Minkowski. In 1908, Minkowski offered a mathematically elegant way of representing what is true from all inertial frames, according to Einstein’s special theory of relativity, using only the coordinates of any single inertial frame. His was a “graphic method” which he said allows us to “visualize” what is going on. The key to his diagram was to represent time in the same way as space, and that is what has led to the belief that what exists is not space and time, but rather spacetime.
In Minkowski’s “spacetime diagrams”, time is represented as a fourth dimension perpendicular to the three dimensions of space (though when comparing two inertial frames, the spatial dimensions can be reduced to one by a suitable orientation of their coordinate frames). A material object at rest in space is represented, therefore, as a line running parallel to the time axis, and a material object with a constant, non-zero velocity is represented by a line inclined slightly in the direction of motion. Units for measuring time and space are usually chosen so that the path of light in spacetime (the “light-line”, t = x/c) bisects the time and space axes, making the “basic unit” of distance how far light travels in a unit of time.
Since the second frame of reference is based on a moving object, we can think of the tilted line representing its pathway as its time axis. From such a moving reference frame, the location of an object at rest in the first frame (such as one always located at its origin) would change relative to the moving frame. So far, this diagram of space and time would be acceptable in classical Newtonian physics, because it represents a so-called Galilean transformation for the coordinates of moving reference frames (in which distances in space would be related as x' = x - vt, where v is their relative velocity in the x-direction).
What Minkowski discovered was that the Lorentz transformation for moving reference frames could be represented by tilting the space line of the moving frame equally in the opposite direction and lengthening the units of time and space. That is, the time-line and the space-line of the moving frame are inclined symmetrically around the pathway of light. (See the comparison of the Newtonian Diagram of Space and Time and Minkowski's Spacetime Diagram.)
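The difference between the Newtonian diagram and Minkowski's can be seen by transforming the same event both ways. A minimal sketch with an assumed relative velocity of 0.5c (the text does not state the velocity used in its diagrams, so these numbers are not meant to reproduce the coordinates quoted for event E below):

```python
import math

def galilean(t, x, v):
    """Newtonian transformation: time is unchanged, position is shifted."""
    return t, x - v * t

def lorentz(t, x, v, c=1.0):
    """The relativistic transformation that Minkowski's diagram represents."""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

t, x, v = 2.0, 1.0, 0.5       # illustrative event and relative velocity (c = 1)
print(galilean(t, x, v))      # (2.0, 0.0): same time coordinate, shifted position
print(lorentz(t, x, v))       # both time and space coordinates change
```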
In either the Newtonian or Minkowski's diagram, every point represents the location of a possible event in space and time (called a “world-point”), and superimposing a second reference frame makes it possible to give such coordinates in either reference frame. From the coordinates for any event in the first reference frame, we can simply read off the coordinates for the same event in the moving reference frame, and vice versa. In the case of event E, for example, the coordinates in the first frame are (2,1), and in Minkowski's diagram, they are (1.3,0.3). All possible reference frames can be represented in this way, each with a different tilt to its time-axis representing its velocity relative to the first.
The two reference frames in the Newtonian diagram have a very simple relationship, because time coordinates are the same for both reference frames and there is no change in the units of either time or space. But Minkowski's spacetime diagram represents the Lorentz transformation, and not only are the units of time and space different, but the space-line of the moving reference frame is inclined relative to the first reference frame.
Minkowski’s spacetime diagram yields the same coordinates for the second reference frame that are obtained from the Lorentz transformation equations deduced by Einstein. Thus, it predicts that measurements of the second inertial frame will reveal its clocks to be slowed down and its measuring rods to be contracted in the x-direction.
But since the Lorentz transformation works both ways, it is possible to start with the second (tilted) reference frame and obtain coordinates for events in the first reference frame. Thus, it predicts that the moving observers will detect Lorentz distortions occurring in the first frame. This symmetry about the relationship between inertial reference frames makes it impossible to single out any particular frame as being at absolute rest by comparing reference frames with one another.
Minkowski's spacetime diagram may seem to mitigate the paradoxes resulting from the symmetry of the relationship between members of any pair of inertial reference frames, because it enables us to "picture" two clocks both ticking away slower than the other and two measuring rods both shorter than the other. It is just a result of how the inertial reference frames are related to one another.
But this wonderful power of Minkowski's spacetime diagram to represent these puzzling phenomena would not be possible, if the space-lines of different reference frames had the same slope. The inclined orientation of the space-line of the second inertial frame relative to the first frame is crucial to representing the Lorentz transformation, and it represents a disagreement between inertial observers about simultaneity at a distance. That is, observers using different inertial reference frames will disagree about which events at a distance are simultaneous with the origins of their systems when they pass by one another. That is the source of all the ontological problems with the belief in spacetime.
Though it is possible to interpret Minkowski's spacetime diagram as just a useful mathematical device for predicting the measurements that would be made on different inertial frames, that is what the Lorentz transformation equations already do. The historical significance of Minkowski's diagram is that it enables us to "picture" what exists in a world where Einstein's special theory of relativity is the deepest truth about the world. Thus, it leads to the belief in spacetime (that is, "spatiotemporalism," as I called it in Spatiomaterialism, or "substantivalism about spacetime," as it is called in the literature.)
The belief in spacetime comes from realism about special relativity. Scientific realism holds that theories in physics are true in the sense of corresponding to what exists, and spacetime is what must exist, if Einstein's special theory of relativity is the deepest truth about the real nature of what exists as far as space and time are concerned.
With regard to space and time, Newtonian realists would say that what their theories correspond to is absolute space and absolute time, that is, to a three dimensional space all of whose parts exists at the present moment and endure simultaneously through time. But that is not what Einstein's special theory of relativity corresponds to, because it implies that observers on all possible inertial reference frames are equally correct about the times and places of the events that occur in the world, even though they disagree about the simultaneity of events at a distance. What all the different inertial observers say about the times and places of events can, however, be true at the same time, only if what exists is represented by Minkowski's spacetime diagram. Thus, spacetime is the natural answer to the question about what corresponds to Einstein's special theory of relativity. According to realists about special relativity, what exists is spacetime, a four-dimensional entity that contains time as a dimension and, thus, is not itself in time.
Though Einstein may merely have been arguing in the spirit of the empiricist skepticism that prevailed in philosophy at that time, Minkowski made it possible to give a realist interpretation of Einstein’s special theory. His spacetime diagram showed how Einstein's theory could be interpreted as a description of what really exists in the case of space and time. Minkowski must have realized that he was giving a realist interpretation of Einstein's special theory of relativity when he introduced his spacetime diagrams; he said (Minkowski 75) that “space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality”. In any case, later in the twentieth century, when logical positivism gave way to scientific realism, Einstein’s skepticism about absolute space, if that is what it was, spawned the belief in the existence of spacetime. Indeed, regardless what Einstein may have believed in 1905, he apparently came to agree that what he had discovered was spacetime. (See Einstein 1966, pp. 205-8).
Scientific realism is, however, a way of letting science determine one's ontology. That is not the best way to decide which ontological theory to accept, because the empirical method that science follows is to infer to the best efficient-cause explanation, and that may not be the best ontological-cause explanation. But we can see how realism led to an ontology based on spacetime.
Einstein's special theory of relativity was a better efficient-cause explanation of the relevant phenomena than Lorentz's way of defending his transformation equations, because it made all the same precise predictions of measurements, but in a mathematically simpler way. As an efficient-cause explanation, however, all that Einstein's special theory requires is an empirical equivalence of inertial reference frames. It assumes that inertial frames are experimentally indistinguishable from one another, and it derives a description about how they must appear to one another as parts of the same world (where Maxwell's laws of electromagnetism hold). That relationship is described by the Lorentz equations for transforming their coordinates into one another, and it is represented by Minkowski's spacetime diagram. But Einstein's was a mathematical argument, and no mechanism or cause of the empirical equivalence was given.
A realist interpretation of special relativity goes beyond mere empirical equivalence and holds that inertial frames are all ontologically equivalent. If special relativity is the literal and deepest truth about the world, then what observers on all possible inertial reference frames believe must be true at the same time. That is to hold, not merely that no experiment can distinguish any one inertial frame from all the others as the absolute frame, but that there is nothing about the nature of any inertial frame that makes it stand out from all the others. That means, among other things, that no assertion made by observers on one inertial frame can be true unless the same kind of assertion made by observers on every other inertial frame is also true. (Nor can any assertion made on one inertial frame be false unless the same kind of assertion made on every other inertial frame is also false.)
The virtue of Minkowski's spacetime diagram is that it enables us to "picture" what exists in a world where inertial reference frames are all ontologically equivalent. Though it may still be unclear what spacetime is, Minkowski's diagram does allow us to believe that all possible reference frames are related to what exists in the same way, for it accommodates all possible standards of simultaneity at a distance. But they can all correspond to what exists only if the world is a four-dimensional entity all of whose parts in both space and time exist in the same way.
It is clear that this ontological equivalence of inertial frames is incompatible with absolute space and time, because if space and time were absolute, one inertial frame would be singled out ontologically from all possible inertial frames. Only one of all possible inertial frames would have the correct standard of simultaneity. Its location in space and time could be shared by observers on many other inertial frames, but none of their claims about which distant events are simultaneous with their shared here and how would correspond to what exists.
Einsteinians do not use the term "ontological equivalence" to describe the relationship between different inertial reference frames, but that is what the belief in spacetime comes to. Most philosophers of space and time simply take it for granted that they must accept "substantivalism" about spacetime in order to interpret the special theory as a description of the real nature of what exists.
To believe in spacetime is to accept an ontology that is fundamentally different from Lorentz's Newtonian view, and the difference can be seen in what each implies about the nature of material objects.
Newtonian physicists assumed that material objects are substances that endure through time. They had to believe in absolute time, because the endurance theory of substances presupposes that only the present exists, or "presentism." (If the world is everything that exists, then objects that exist at only one moment in their histories must exist at the same time, for otherwise they would not be parts of the same world.) And since Newtonian physicists believed that material objects are all related to one another by (consistent) spatial relations, they were also forced to believe in absolute space. In a natural world, absolute time entails absolute space. Hence, the Newtonian world was made up of material objects in three dimensional space that endured through time.
Spacetime, on the other hand, is a four-dimensional entity. What exists is spacetime and all the events that are located in spacetime. Since time is an aspect of its essential structure, a spacetime world cannot endure through time. Thus, spacetime points and spacetime events must all exist in the same way independently of one another, if they exist at all. There are no material objects in a spacetime world, at least, not in the way that Lorentz believed. There are only the spacetime events that seem to make up the histories of so-called material objects. Thus, what is ordinarily called a "material object" is just a continuous series of spacetime events in spacetime. Its real nature is represented accurately by a “world line” in a spacetime diagram, because each spacetime event making up the history of a "material object" has an existence that is distinct from all the others, just as one point on a line exists distinctly from every other point on the line.
In short, whereas a material object in a Newtonian world exists only at each moment as it is present, but is identical across time, a so-called material object in a spacetime world is a continuous series of spacetime events, each of which exists eternally as a distinct part of the world. This is the difference between the endurance and perdurance theory of substances, and between the presentist and eternalist theory about time and existence.
Scientific realists sometimes assume that they can believe that Einstein's special theory of relativity corresponds to what exists without denying that they are themselves substances that endure through time by holding that only objects at a distance from themselves must exist the same way at all different moments in their histories. But that is not possible, if they believe that the truth of Einstein's special theory means that it corresponds to what exists for every observer. If Einstein's theory is universally true, then it must be true for inertial observers located elsewhere in the universe, and the only way that different inertial observes at a distance from us can all be correct about which moment in our local history is simultaneous with their passing by one another is if the moments in our local history all exist in the same way. We must perdure, rather than endure, because we are material objects at a distance for inertial observers elsewhere in the universe.
What Minkowski's “union” of space and time means ontologically is, therefore, that presentism is false. The denial of presentism is such a serious obstacle to an ontological explanation of the world that, in Spatiomaterialism, we were led to reject spacetime substantivalism (or "spatiotemporalism"), promising to justify it later by showing how it is possible for space and time to be absolute, despite the Einsteinian revolution. That is the argument we take up in the next section. But first, let us consider briefly why physics has ignored the ontological problems with eternalism.
What explains the ascendancy of the belief in spacetime is, once again, the empirical method of science and the physicists' addiction to mathematics as a means of practicing it. Behind Minkowski's spacetime diagram lies an elegant equation that has proved to be irresistibly attractive.
Minkowski provided a method of constructing in our own spacetime coordinate frame the spacetime coordinate frame that would be used by observers on an object moving relative to us. We may call their world-line the “moving timeline” (t = x/v), because it will be the time axis that moving observers use for their spacetime coordinate frame.
Minkowski formulated the conclusion of Einstein’s special theory as an equation that describes a hyperboloid in four dimensional spacetime: 12 = c2t2 - x2 - y2 - z2. (When we orient our x-axis in the direction of the others’ motion, we can ignore the other two dimensions and it reduces to 12 = c2t2 - x2.) (It is the red curve in the diagram depicting how Minkowski's spacetime diagram is constructed.) The intersection of Minkowski’s hyperboloid curve with our time-axis is the unit of time in our frame (t = 1), and the unit of distance (in “basic units”) is the distance in our frame that light travels during that period of time (x = 1). The moving timeline (the time-axis of the moving spacetime frame) also intersects the curve described by Minkowski’s equation, and the distance of that point along our time-axis is the length of a unit of time on the moving coordinate frame according to our clocks.
As the diagram shows, moving clocks are slowed down in our frame. The other axis of the moving spacetime frame, the “moving space-line”, is also deduced from Minkowski’s equation. Moving space-lines all have the same slope as the tangent to Minkowski’s curve at the point of the moving timeline’s intersection with his curve. (Its slope is v/c2; the points on any line with this slope are simultaneous in the moving spacetime frame.) Finally, the unit of distance on the moving space-line is how far light travels in the moving frame during a unit of time on the moving frame.
Inertial frames are all equivalent on Minkowski’s theory, as on Einstein’s, since Minkowski’s equation determines precisely the same hyperbola in every moving inertial frame constructed this way in our own spacetime coordinate frame. That is, their hyperbolas all coincide. In particular, the same procedure on the moving coordinate frame, using the same equation (and taking the velocity to be -v along the x'-axis), produces the original coordinate frame. Or more abstractly, Minkowski’s equation can be generalized as a measure, s, of the separation between any two events that is the same in every inertial frame, despite variations in their coordinates for particular events: s2 = c2t2 - x2 - y2 - z2.
In Minkowski’s equation, the parallel between the representation of space and time is remarkable. Time would be just another spatial dimension, except that it lacks a minus sign (and needs the velocity of light, c, to make units of time commensurable with distance). Indeed, that is how Minkowski includes relativistic mass increase. His equations’s form can be used to state the laws of nature that hold true in every inertial frame. In “four vector physics”, or “covariant” formulations of laws of physics, the energy of an object, E, takes the place of time and the three dimensions of momentum, p, take the place of the three spatial dimensions, so that the objects’ rest mass, m0, rather than the separation, is what is the same about the object in all inertial frames: mo2c4 = E2 - px2c2 - py2c2 - pz2c2. The mathematics of four vector physics is so elegant and suggestive about the relationship of energy and momentum that it is not surprising that physicists now find themselves committed to the belief in spacetime.
By comparison with Lorentz’s ad hoc attempts to patch up classical physics in the wake of the Michelson-Morley experiment, Einstein’s argument was astonishingly simple and elegant, making it seem that Einstein had a deeper insight into these phenomena. And since Minkowski provided a diagram that made it possible to represent what special relativity implies about the world independently of particular reference frames, it is hardly surprising that the belief in spacetime has become the orthodox ontology in physics and the philosophy of science.
The acceptance of Einstein’s special theory of relativity involved, however, a remarkable change in the empirical method of physics, for it involved the abandonment of the requirement that explanations in physics be intuitively intelligible.
To follow the empirical method is to infer to the best efficient-cause explanation. Even in classical physics, theories were highly mathematical and confirmation was most convincing when they predicted surprising, quantitatively precise measurements. But since classical physicists still believed in absolute space and time, they also expected the best scientific theories to be intuitively intelligible, in the sense that it was possible to think coherently about what was happening in spatial imagination. But intuitive intelligibility was no longer possible when the best scientific theory required giving up the belief in absolute space and time. That was undeniably a loss, but physicists felt that they had to grow up and recognize that their deepest commitment was to judging the best theory by which is the simplest and most complete prediction of measurements. Since this came from mathematical theories, abandoning the requirement that physical explanations be intuitively intelligible left them addicted to mathematics.
This is because the velocity of light relative to the object in motion is different in opposite directions, and going one way the whole distance at the lower (relative) velocity takes more extra time than it can make up coming back over the same distance at the higher (relative) velocity. Though the path back and forth is spatially symmetric, the effect of the velocity of light relative to the frame on the time of travel accumulates per unit time, and so the signal loses more time than it gains.
The equation was L=Lo, where Lo was the length at absolute rest. The shrinkage had been proposed independently by George F. Fitzgerald in 1889 and hence became known as the “Lorentz-Fitzgerald contraction”. Relevant portions of Lorentz’s 1985 monograph and 1904 theory are reprinted in Lorentz, et al, (1923, pp. 3-84).
See Stanley Goldberg (1984, p. 98) and Roberto Torretti (1983, pp. 45-6). Hereafter, these works are referred to as “Goldberg” or “Torretti”, with page numbers. “Holton” refers to Holton (1973). “Zahar” refers to Zahar (1989).
The discovery of the Lorentz distortions was complicated by the fact that there are other effects of absolute motion on material objects, besides those that are directly related to the Michelson-Morley experiment. These are the “first-order” effects of motion in space (which vary as v/c, rather than as v2/c2, or “second order” effects), such as the way telescopes must be inclined slightly in the direction of motion in order to intercept light from overhead stars (much as umbrellas must be inclined slightly forward in walking through rain to keep raindrops from hitting one’s body). First order effects (including the effects on the index of refraction) had previously been explained by the “ether drag” hypothesis (that the motion of material objects drags the ether along with them), but Lorentz abandoned it . Lorentz’s explanation of length contraction assumed that the ether is totally unaffected by the motion of material objects through it, and he had no explanation of such first order effects except to state transformation equations by which one could obtain the coordinates used on the moving object from those used at absolute rest. Goldberg, pp. 88-92; Torretti, pp. 41-45 | http://www.twow.net/ObjText/OtkCaLbStrB.htm | 13 |
166 | An Introduction to Air Density
Saturation Vapor Press Calculator
The Smithsonian reference tables (see ref 1) give the following values of saturated vapor pressure values at specified temperatures. Entering these known temperatures into the calculator will allow you to evaluate the accuracy of the calculated results.
|Deg C||Es, mb|
Armed with the value of the saturation vapor pressure, the next step is to determine the actual value of vapor pressure.
When calculating the vapor pressure, it is often more accurate to use the dew point temperature rather than the relative humidity. Although relative humidity can be used to determine the vapor pressure, the value of relative humidity is strongly affected by the ambient temperature, and is therefore constantly changing during the day as the air is heated and cooled.
In contrast, the value of the dew point is much more stable and is often nearly constant for a given air mass regardless of the normal daily temperature changes. Therefore, using the dew point as the measure of humidity allows for more stable and therefore potentially more accurate results.
Actual Vapor Pressure from the Dew Point:
To determine the actual vapor pressure, simply use the dew point as the value of T in equation 5 or 6. That is, at the dew point, Pv = Es.
(7a) Pv = Es at the dew point
where Pv= pressure of water vapor (partial pressure)
Es = saturation vapor pressure ( multiply mb by 100 to get Pascals)
Actual Vapor Pressure from Relative Humidity:
Relative humidity is defined as the ratio (expressed as a percentage) of the actual vapor pressure to the saturation vapor pressure at a given temperature.
To find the actual vapor pressure, simply multiply the saturation vapor pressure by the percentage and the result is the actual vapor pressure. For example, if the relative humidity is 40% and the temperature is 30 deg C, then the saturation vapor pressure is 42.43 mb and the actual vapor pressure is 40% of 42.43 mb, which is 16.97 mb.
(7b) Pv = RH * Es
pressure of water vapor (partial pressure)
RH = relative humidity (expressed as a decimal value)
Es = saturation vapor pressure ( multiply mb by 100 to get Pascals)
Dry Air Pressure:
Now that the water vapor pressure is known, we are nearly ready to calculate the density of the combination of dry air and water vapor as described in equation 4a, but first, we need to know the pressure of the dry air.
The total measured atmospheric pressure (also called actual pressure, absolute pressure, or station pressure) is the sum of the pressure of the dry air and the vapor pressure:
(8a) P = Pd + Pv
where: P = total pressure
Pd = pressure due to dry air
Pv = pressure due to water vapor
So, rearranging that equation:
(8b) Pd = P - Pv
where: P = total pressure
Pd = pressure due to dry air
Pv = pressure due to water vapor
Now that we have the pressure due to water vapor and also the pressure due to the dry air, we have all of the information that is required to calculate the air density using equation 4a.
Calculate the air density:
Now armed with those equations and the actual air pressure, the vapor pressure and the temperature, the density of the air can be calculated.
Here's a calculator that determines the air density from the actual pressure, dew point and air temperature using equations 4, 6, 7 and 8 as defined above:
Air Density Calculator
Moist Air is Less Dense...
As you may have noticed, moist air is less dense than dry air. It may seem reasonable to try to argue against that simple fact based on the observation that water is denser than dry air... which is certainly true, but irrelevant.
Solids, liquids and gasses each have their own unique laws, so it is not possible to equate the behavior of liquid water with the behavior of water vapor.
The ideal gas law says that a certain volume of air at a certain pressure has a certain number of molecules. That's just the way this world works, and that simple fact is expressed as the ideal gas law, which was shown above in equation 1.
Note that this is the gas law... not a liquid law, nor a solid law, but a gas law. Hence, any mental comparisons to the behavior of a liquid are of little help in understanding what is going on in the air, and are likely to simply result in greater confusion.
According to the ideal gas law, a cubic meter of air around you, wherever you are right now, has a certain number of molecules in it, and each of those molecules has a certain weight. The key to understanding air density changes due to moisture is grasping the idea that a given volume of air has only a certain number of molecules in it. That is, whenever a water vapor molecule is added to the air, it displaces some other molecule in that volume of air.
Most of the air is made up of nitrogen molecules N2 with a somewhat lesser amount of oxygen O2 molecules, and even lesser amounts of other molecules such as water vapor.
Since density is weight divided by volume, we need to consider the weight of each of the molecules in the air. Nitrogen has an atomic weight of 14, so an N2 molecule has a weight of 28. For oxygen, the atomic weight is 16, so an O2 molecule has a weight of 32.
Now along comes a water molecule, H2O. Hydrogen has an atomic weight of 1. So the molecule H20 has a weight of 18. Note that the water molecule is lighter in weight than either a nitrogen molecule (with a weight of 28) or an oxygen molecule (with a weight of 32).
Therefore, when a given volume of air, which always contains only a certain number of molecules, has some water molecules in it, it will weigh less than the same volume of air without any water molecules. That is, moist air is less dense than dry air.
Some examples of calculations using air density:
L = c1 * d * v2/2 * a
where: L = lift
c1 = lift coefficient
d = air density
v = velocity
a = wing area
From the lift equation, we see that the lift of a wing is directly proportional to the air density. So if a certain wing can lift, for example, 3000 pounds at sea level standard conditions where the density is 1.2250 kg/m3, then how much can the wing lift on a warm summer day in Denver when the air temperature is 95 deg (35 deg C), the actual pressure is 24.45 in-Hg (828 mb) and the dew point is 67 deg F (19.4 deg C)? The answer is about 2268 pounds.
Example 2) The engine manufacturer Rotax (see ref 6 ) advises that their carburetor main jet diameter should be adjusted according to the air density. Specifically, if the engine is jetted properly at air density d1, then for operation at air density d2 the new jet diameter j2 is given mathematically as:
j2 = j1 * (d2/d1) (1/4)
where: j2 = diameter of new jet
j1 = diameter of jet that was proper at density d1
d1 = density at which the original jet j1 was correct
d2 = the new air density
That is, Rotax says that the correct jet diameter should be sized according to the fourth root of the ratio of the air densities. (Note: according to Poiseuille's Law, the volumetric flow rate through a circular cross section is proportional to the fourth power of the diameter.)
For example, if the correct jet at sea level
standard conditions is a number 160 and the jet number is a measure of the
jet diameter, then what jet should be used for operations on the warm summer
day in Denver described in example 1 above? The ideal answer is a jet number
149, and in practice the closest available jet size is then selected.
Example 3) In the same service bulletin mentioned above, Rotax says that their engine horsepower will decrease in proportion to the air density.
hp2 = hp1 * (d2/d1)
where: hp2 = the new horsepower at density d2
hp1 = the old horsepower at density d1
If a Rotax engine was rated at 38 horsepower at sea level standard conditions, what is the available horsepower according to that formula when the engine is operated at a temperature of 30 deg C, a pressure of 925 mb and a dew point of 25 deg C? The answer is approximately 32 horsepower. (See also details on the SAE method of correcting horsepower.)
Importance of Air Density:
So far, we've been discussing real physical attributes which can be precisely measured, with air density being the weight per unit volume of an air mass. The air density, as shown in the previous examples, affects the lift of a wing, the fuel required by an engine, and the power produced by an engine. When precision is required, air density is a much better measure than density altitude.
Air density is a physical quality which can be accurately measured and verified. On the other hand, density altitude is a rather conceptual quantity which depends upon a hypothetical "standard atmosphere" which may or may not accurately correspond to the actual physical conditions at any given location. Nonetheless, density altitude has a long heritage and remains a common (although rather hypothetical) representation of air density.
Back on the trail of Density Altitude...
The definition of density altitude is the altitude at which the density of the 1976 International Standard Atmosphere is the same as the density of the air being evaluated. So, now that we know how to determine the air density, we can solve for the altitude in the International Standard Atmosphere that has the same value of density.
The 1976 International Standard Atmosphere (ISA) is a mathematical description of a theoretical atmospheric column of air which uses the following constants (see ref 16):
Po = 101325 sea level standard pressure, Pa
To = 288.15 sea level standard temperature, deg K
g = 9.80665 gravitational constant, m/sec2
L = 6.5 temperature lapse rate, deg K/km
R = 8.31432 gas constant, J/ mol*deg K
M = 28.9644 molecular weight of dry air, gm/mol
In the ISA, the lowest region is the troposphere which extends from sea level up to 11 km (about 36,000 ft), and the model which will be developed here is only valid in the troposphere.
The following equations describe temperature, pressure and density of the air in the ISA troposphere:
(9) (see ISA pg 10, Eqn 23)
(10) (see ISA pg 12, Eqn 33a)
(11) (see ISA pg 15, Eqn 42)
where: T = ISA temperature in deg K
P = ISA pressure in Pa
D = ISA density in kg/m3
H = ISA geopotential altitude in km
One way to determine the altitude at which a certain density occurs is to rewrite the equations and solve for the variable H, which is the geopotential altitude.
So, it is now necessary to rewrite equations
9, 10, and 11 in a manner which expresses altitude H as a function of density
D. After a bit of gnashing of teeth and general turmoil using algebraic
substitutions of those three equations, the exact solution
for H as a function of D, may be written as:
Using the numerical values of the ISA constants, that expression may be evaluated as:
where H = geopotential altitude, km
D = air density, kg/m3
Now that H is known as a function of D, it is easy to solve for the Density Altitude of any specified air density.
It is interesting to note that equations 9, 10 and 11 could also be evaluated to find H as a function of P as follows:
where H = geopotential altitude, km
P = actual air pressure, Pascals
Now that we can determine the altitude for a given density, it may be useful to consider some of the definitions of altitude.
Different Flavors of Altitude:
There are three commonly used varieties of altitude (see ref 4). They are: Geometric altitude, Geopotential altitude and Pressure altitude.
Geometric altitude is what you would measure with a tape measure, while the Geopotential altitude is a mathematical description based on the potential energy of an object in the earth's gravity. Pressure altitude is what an altimeter displays when set to 29.92.
The ISA equations use geopotential altitude, because that makes the equations much simpler and more manageable. To convert the result from the geopotential altitude H to the geometric altitude Z, the following formula may be used:
where E = 6356.766 km, the radius of the earth (for 1976 ISA)
H = geopotential altitude, km
Z = geometric altitude, km
Density Altitude Calculator:
The following calculator uses equation 12 to convert an input value of air density to the corresponding altitude in the 1976 International Standard Atmosphere. Then, the results are displayed as both geopotential altitude and geometric altitude, which are very nearly identical at lower altitudes.
Note that since these equations are designed to model the troposphere, this calculator will give an error message if the calculated value of altitude is beyond the bounds of the troposphere, which extends from sea level up to a geopotential altitude of 11 km.
Density Altitude Calculator 1
Here's a calculator that uses the actual pressure, air temperature and dew point to calculate the air density as well as the corresponding density altitude:
Density Altitude calculations using Virtual Temperature:
As an alternative to the use of equations which describe the atmosphere as being made up of a combination of dry air and water vapor, it is possible to define a virtual temperature for an atmosphere of only dry air.
The virtual temperature is the temperature that dry air would have if its pressure and specific volume were equal to those of a given sample of moist air. It's often easier to use virtual temperature in place of the actual temperature to account for the effect of water vapor while continuing to use the gas constant for dry air.
The results should be exactly the same as in the previous method, this is just an alternative method.
There are two steps in this scheme: first calculate the virtual temperature and then use that temperature in the corresponding altitude equation.
The equation for virtual temperature may be derived by manipulation of the density equation that was presented earlier as equation 4a:
Recalling that P = Pd + Pv, which means that Pd = P - Pv, the equation may be rewritten as
Finally, a new temperature Tv, the virtual temperature, is defined such that
By evaluating the numerical values of the constants, setting Pv = E, noting that Rd = R*1000/Md and that Rv=R*1000/Mv, then the virtual temperature may be expressed as:
where Tv = virtual temperature, deg K
T = ambient temperature, deg K
c1 = ( 1 - (Mv / Md ) ) = 0.37800
E = vapor pressure, mb
P = actual (station) pressure, mb
where Md is molecular weight of dry air = 28.9644
Mv is molecular weight of water = 18.016
(Note that for convenience, the units in Equation 14 are not purely SI units, but rather are US customary units for the vapor pressure and station pressure.)
The following calculator uses equation 6 to find the vapor pressure, then calculates the virtual temperature using equation 14:
The virtual temperature Tv may used in the following formula to calculate the density altitude. This formula is simply a rearrangement of equations 9, 10 and 11:
Using the numerical values of the ISA constants, equation 15 may be rewritten using the virtual temperature as:
where H = geopotential density altitude, km
Tv = virtual temperature, deg K
P = actual (station) pressure, Pascals
Using the Altimeter Setting:
When the actual pressure is not known, the altimeter reading may be used to determine the actual pressure. (For more information about ambient air pressure measurements see the pressure measurement page.)
The altimeter setting is the value in the Kollsman window of an altimeter when the altimeter is adjusted to read the correct altitude. The altimeter setting is generally included in National Weather Service reports, and can be used to determine the actual pressure using the following equations:
According to NWS ASOS documentation, the actual pressure Pa is
related to the altimeter setting AS by the following equation:
By numerically evaluating the constants and converting to customary units of altitude and pressure, the equation may be written as:
Pa = [ASk1 - ( k2 * H ) ]1/k1
where Pa = actual (station) pressure, mb
AS = altimeter setting, mb
H = geopotential station elevation, m
k1 = 0.190263
k2 = 8.417286*10-5
When converted to English units, this is the relationship between station pressure and altimeter setting that is used by the National Weather Service ASOS weather stations (see ref 10 ) as:
Pa = [AS0.1903 - (1.313 x 10-5) x H]5.255
where Pa = actual (station) pressure, inches Hg
AS = altimeter setting, inches Hg
H = station elevation, feet
(Note: several other equations for converting actual pressure to altimeter setting are given in ref 12.)
Using these equations, the altimeter setting may be readily converted to actual pressure, then by using the actual pressure along with the temperature and dew point, the local air density may be calculated, and finally the density may be used to determine the corresponding density altitude.
Given the values of the altimeter setting (the value in the Kollsman window) and the altimeter reading (the geometric altitude), the following calculator will convert the altitude to geopotential altitude, and solve equation 16 for the actual pressure at that altitude.
Altimeter Values to Actual Pressure
Using National Weather Service Barometric Pressure:
Now you're probably wondering about converting sea-level corrected barometric pressure, as reported in a weather forecast, to actual air pressure for use in calculating density altitude. Well the good news is that yes, sea level barometric pressure can be converted to actual air pressure. The bad news is that the result may not be very accurate.
If you want accurate density or density altitude calculations, you really need to know the actual air pressure.
In order to compare surface pressures from various parts of the country, the National Weather Service converts the actual air pressure reading into a sea level corrected barometric pressure. In that way, the common reference to sea level pressure readings allows surface features such as pressure changes to be more easily understood.
But, unfortunately, there really is no fool-proof way to convert the actual air pressure to a sea level corrected value. There are a number of such algorithms currently in use, but they all suffer from various problems that can occasionally cause inaccurate results (see ref 7).
It has been estimated that the errors in the sea level pressure reading (in mb) may be on the order of 1.5 times the temperature error for a station like Denver at 1640 meters. So, if the temperature error was 10 deg C, then the sea level pressure conversion might occasionally be in error by 15 mb. At the very highest airports such as Leadville, Colorado at an elevation of 3026 meters (9927 ft), perhaps the error might be on the order of 30 mb.
And further complicating matters, without knowing the details of the algorithm that was used to calculate the sea level pressure, it is likely that there will be some additional error introduced in the process of converting the sea level pressure back to the desired actual station pressure.
These error estimates are probably on the extreme side, but it seems reasonable to say that the density altitude calculations made using the National Weather Service sea level pressure calculations may have an uncertainty of ±10% or more.
When using pressure data from the National
Weather Service, be certain to find out if the pressure is the altimeter
setting or the sea-level corrected pressure. They may be quite different in
If you really want to know the actual density altitude, it will need to be calculated in the general manner that has been described above. However, there are simple approximations which have been developed over the years.
For example, a particularly convenient form of density altitude approximation is obtained by simply ignoring the actual moisture content in the air. Here is such an equation which has been used by the National Weather Service (see ref 13) to calculate the approximate density altitude without any need to know the humidity, dew point or vapor pressure:
where: DA = density altitude, feet
Pa = actual pressure (station pressure), inches Hg
Tr = temperature, deg R (deg F + 459.67)
This simplified equation (17) is, basically, just equation (12) rewritten in US customary units with no pressure contribution due to water vapor pressure.
The following calculator can be used to compare the results of
the accurate calculations (in geometric altitude, as described earlier on this web page) with
the results from the preceding simplified equation:
The results for dry air (very low dew point) are nearly identical, while the greatest errors in the simplified equation are when there is a lot of water vapor in the air, i.e. high temperature accompanied by a high dew point.
To explore the effects of water vapor, consider, for example, a hypothetical ambient temperature of 95 deg F, with a dew point of 95 deg, at an altitude of 5050 feet and an altimeter setting of 29.45 , the actual air pressure would be 24.445 in-Hg and the actual Density Altitude would be 9753 feet, while the simplified equation gives a result of 8933 feet.... an error of 820 feet. The actual air density in this case would be reduced by about 3%, compared to dry air.
Or, for a hypothetical 95 deg F foggy day at sea level, with a dew point of 95 deg F and an altimeter setting of 29.92, the actual density altitude is 2988 ft, while the simplified equation gives a result of 2294 ft... an error of 694 ft. Similar to the previous example, the actual air density in this would be reduced by about 3%, compared to dry air.
Those examples are quite extreme, but in actual practice it is quite common to see errors on the order of 200 to 400 ft along the sea coast and in the sweltering mid-west, which may be inconsequential, or may be significant, depending upon your specific situation.
So, if you don't mind some error when the air has a lot of water vapor, then the simplified equation, which is much easier to calculate, may suit your needs.
But if you really want the utmost accuracy in determining the density altitude, then you'll have to deal with the gory details of vapor pressure and compute the "real" density altitude.
Based on the reported observations from a variety of US airports, it appears that the ASOS and AWOS-3 automated weather observation systems (which report weather conditions including density altitude at many airports in the US) use a simplified equation which gives essentially the same results as equation 17 above. That is, it appears that the current ASOS/AWOS density altitude does not account for effects of moisture in the air.
You can compare the actual Density Altitude with the ASOS/AWOS-3 reported values using the calculator at: Density Altitude Calculator - with selectable units.
However, before you get too distressed by such seemingly "sloppy" ASOS/AWOS calculations, keep in mind that the International Standard Atmosphere is merely a conceptual model which may or may not accurately represent the conditions at any given location on any given day. That is, "density altitude" and "standard atmosphere" are theoretical concepts which are based upon a number of assumptions about the atmosphere, and may or may not accurately depict the actual physical conditions at any actual location, no matter how accurate the calculations may be.
Actually, it would be far more meaningful, useful and precise if ASOS/AWOS reported the actual air density in kg/m3, and if the performance data in pilot's handbooks was also expressed in terms of actual air density in kg/m3. But that's not what is currently done. Currently, data in terms of "altitude" and "density altitude" are generally what we're given. That's a pity.
Hopefully, someday all of the aircraft performance tables/charts and weather reporting systems will be expressed in terms of the actual air density and thereby avoid this arcane concept of density altitude... but, for now, we're stuck with "density altitude".
If we really want to be precise and consistent, we should be using the actual air density, not this theoretical quantity called density altitude.
For those who want to do their own density altitude calculations, here's a list of the steps performed by my on-line Density Altitude Calculator :
1. convert ambient temperature to deg C,
2. convert geometric (survey) altitude to geopotential altitude in meters,
3. convert dew point to deg C,
4. convert altimeter setting to mb.
5. calculate the saturation vapor pressure, given the ambient temperature
6. calculate the actual vapor pressure given the dew point temperature
7. use geopotential altitude and altimeter setting to calculate the absolute pressure in mb,
8. use absolute pressure, vapor pressure and temp to calculate air density in kg/m3,
9. use the density to find the ISA altitude in meters which has that same density,
10. convert the ISA geopotential altitude to geometric altitude in meters,
11. convert the geometric altitude into the desired units and display the results.
My On-Line Density Altitude and Engine Tuner's Calculators:
Click here for Engine Tuner's Calculator which includes air density, density altitude, relative horsepower, virtual temperature, absolute pressure, vapor pressure, relative humidity and dyno correction factor.
1. List, R.J. (editor), 1958, Smithsonian Meteorological Tables, Smithsonian Institute, Washington, D.C.
2. Thermodynamic subroutines by Schlatter and Baker .... lots of Fortran algorithms and excellent references
3. El Paso National Weather Service ... weather related formulas
4. http://mtp.mjmahoney.net/www/notes/altitude/altitude.html ... different flavors of altitude explained
8. http://www.digitaldutch.com/unitconverter/index.htm ... conversion factors
9. http://physics.nist.gov/Pubs/SP811/appenB8.html ... SI conversion factors from NIST
11. http://atmos.nmsu.edu/education_and_outreach/encyclopedia/sat_vapor_pressure.htm ... NASA vapor pressure
12. There are some additional altimeter setting algorithms at http://www.wmo.ch/pages/prog/www/IMOP/publications/IOM-19-Synoptic-AWS.pdf and http://www.srh.noaa.gov/images/epz/wxcalc/altimeterSetting.pdf Also see http://www.softwx.com/weather/uwxutils.html for weather equations, including additional methods for converting station pressure to altimeter setting.
14. For more details about the effects of non-ideal compressibility and vapor pressure not measured over liquid water, see Techniques and Topics in Flow Measurement, Frank E. Jones, p37 and also Comité International des Poids et Mesures CIPM-2007 (or CIPM-81/89). PDF file of Revised Formula (CIPM-2007) To convert the CIPM-2007 density to the forms given in my equation 4a and 4b, note that Xv = RH * f * Psv/P, with RH = Pv/Psv. Let f = 1, which then gives Xv = Pv/P. Then let Z=1, and simply rearrange the equation to yield the forms given in my 4a and 4b.
Some related web links:
http://www.luizmonteiro.com ... a large collection of aviation related calculators
http://atmos.nmsu.edu/education_and_outreach/encyclopedia/humidity.htm ... humidity equations
http://www.digitaldutch.com/atmoscalc/ ... ISA calculator on-line
http://www.grc.nasa.gov/WWW/K-12/airplane/short.html ... index of education materials from NASA
El Paso NWS - calculators ... atmospheric calculators using Tim Brice's cgi scripts
http://www.grc.nasa.gov/WWW/K-12/airplane/foil3.html ... NASA airfoil simulator... fun tool
http://www.usatoday.com/weather/wdenalt.htm ... lots of pages of weather
related info and formulas
http://hurri.kean.edu/~yoh/calculations/satvap/satvap.html ... saturation equations plus calculator
http://hurri.kean.edu/~yoh/calculations/moisture/Equations/moist.html ... moisture calculations
http://www.weathergraphics.com/ ... low-cost software for personal weather analysis
Copyright 1998-2012, All Rights Reserved, Richard Shelquist, Shelquist Engineering. | http://wahiduddin.net/calc/density_altitude.htm | 13 |
162 | K12 Electromagnetism and magnetism
Interaction between a magnet and a conductor through which flows a current
The thermal and chemical effects of an electric current occur in the track of the current. Consider next actions of currents away from it. Just as electricity at rest has its electric field, so does the moving one; the environment of a conductor carrying a current exercises apparent distant action: Especially on nearby magnetic needles. A magnetic needle has at every point on Earth's surface a definite direction (in a definite magnetic meridian) and, if you force it to turn elsewhere and then release it, it will always return to its initial direction. Apparently a force (Earth's magnetism) keeps the needle within the magnetic meridian; it requires work, to deflect it from it. Work of this kind can be performed by an electric current: It diverts the magnetic needle (first discovered by Oerstedt 1820) and thereby performs what only a magnet can do - in other words: Flowing electricity exerts magnetic forces. For the direction in which the current turns the needle you have the rule: Imagine you swim in the conductor, with the current, with your head forwards, the face turned to the needle; then the North pole of the deflected needle will be turned towards your left hand (swimming rule of Ampère). If you make an adjustment for Earth's magnetism by bringing into the neighbourhood of the needle a magnet, directed in a definite manner, it will obey unimpeded the diverting force of the current. If you carry the needle around the conductor, it will always place itself perpendicularly to it, the North pole ahead, as described by the swimming rule. Fig. 533 displays this for a conductor, perpendicular to the plane of the drawing.
Thus, the North pole of the needle (arrow head) experiences a force to circulate the conductor; so does the South pole, but in the opposite direction to that of the North pole. If the conductor is very flexible, say a longer narrow strip of tinsel, hanging beside a vertical magnetic rod, the strip will wind itself around the magnet, when the current is closed, the ends of the strip in opposite directions corresponding to the opposite poles. Thus, every pole acts by itself on the conductor through which flows a current and not only, because is is connected to the opposite pole. Corresponding to the law of the equality of action and reaction, the magnet must also act on the current. The pole tends to make the conductor circulate around it just as the conductor has done with it: In the set-up of Fig. 534 b, the magnet is fixed and the conductor can move (Faraday). If you close the current, the conductor describes about the pole of the magnet the mantle of a cone. In the arrangement of Fig. 534 a, where M is the magnet and S and the mercury form the circuit, the North pole circulates - in the direction corresponding to the swim rule about S.
Interaction between conductors with flowing current
A full understanding of the interaction between a magnet and electric currents is only reached when we examine the mutual interaction of two currents (Ampère 1820). Electric currents attract or repel (electro-dynamically) each other depending on their relative directions (Fig. 535); parallel currents of equal direction attract, anti-parallel currents repel each other. The frame, referred to as Ampère's frame (Fig. 537), demonstrates this well: The frame BC can rotate in the bearings a and c has current flow, as is indicated by the arrows. A second frame MN, also with current flowing through it, is placed parallel to it nearby and fixed. If you rotate the movable frame so that B and M approach each other, you observe repulsion, between C and N attraction.
Fig. 538 demonstrates that currents in the same direction attract each other: A very thin, elastic, vertically hanging screw spring, through which flows current and the lower end of which - with a weight extending the spring - dips freely movable into mercury. The spring and the mercury form a circuit. In all the windings of the spring the direction of the current is the same, whence they attract each other, the spring shortens. pulls in spite of the weight its movable end upwards out of the mercury and breaks the circuit. Now the spiral spring follows the pull of the weight, dips again into the mercury and thus the cycle repeats itself.
Active conductors, which cross one another (Fig. 536), attract each other, if both move towards the intersection or move away from it. If one moves there, the other moves away, they repel each other.
The electro-dynamometer for the measurement of current intensities depends on the interaction of current carrying conductors (Fig. 539). V is a fixed frame with wire wound around it, W a similar frame which can rotate in the bearings B. Both frames form the circuit 2VBWC1. The frame W stands without current, acted upon by a torsion spring F. at right angle to the frame V. When current flows through it, it tends to rotate so that the conductors through which flows current in the same direction lie next to each other. This rotation twists the spring F more or less. A position of rest is reached, when the tension of the spring is in equilibrium with the electro-dynamic action between the frames. The pointer z, linked to W, indicates the current intensity on an empirically calibrated scale. The direction of the pointer's movement remains the same at a change in the direction of the current, whence these instruments are suitable for employment with alternating currents (Siemens).
A movable active conductor can also move along a fixed active conductor (Fig. 540). The parts of the conductor to the right of m must repel the conductor ab, those to the left of m attract it, whence it must move along cd towards c. Hence the movable bow b (Fig. 541) rotates, that is, it pushes itself along the fixed conductor, which surrounds the mercury vessel.
Solenoid. Electromagnet Thus, the surroundings of an electric current act like those of a magnet. This becomes especially clear when the conductor forms a solenoid (Greek: swlhn = pipe), a screwed wire, the windings of which support each other's action (Fig. 542). A solenoid, the axis of the windings of which can rotate about a vertical axis (Fig. 543), locates itself like a magnet needle, the plane of the windings at right angle to the magnetic meridian. Its axis corresponds to that of a magnet needle, the South pole B lying to the side, seen from which the current flows clockwise around the axis. - Two solenoids interact like two magnets, equally named ends repel, differently named ends attract each other - this is easily understood, because, if two solenoids lie with the equally (differently) named ends A and A' together (Fig. 543), then parts of the conductors lie side by side, in which the currents are directed opposite (in the same direction), that is, repel (attract) each other. A solenoid behaves with respect to a magnet like a magnet with respect to another magnet; the North pole of the magnet attracts the South pole of the solenoid and repels the North pole of the solenoid. Ampère concluded from this that the magnet and the solenoid are similar. According to his hypothesis, the magnetic field of magnetized bodies, should be generated by currents inside molecules - Ampère's molecular currents. Their existence can be proved by experiments (Einstein, Haas 1915), which let one understand that a not magnetic iron rod, around which you feed a solenoid current (Fig. 544), becomes a ,magnet - an electro-magnet; the solenoid current makes them parallel and directs them in the same direction, whence they generate the external magnetic field.
The electric magnet is employed in innumerable appliances, which are switched on from an arbitrarily distant location (through a circuit), and can perform mechanical work. For example, you have among them the regulation of clocks by a standard clock in an astronomical observatory, operation of a warning sound system along a railways track or activation of an electric telegraph writer. In each case, activation of a magnet performs the work: Here the activated mechanism adjusts a clock, there a hammer hits periodically a bell, there a typewriter is turned on and operated.
Until 1900, the most used telegraph apparatus was that of Samuel Finley Breese Morse 1791-1872 1844 (Fig. 545). E is the electromagnet. As long and as often as it is activated, it attracts the anchor A and presses by the angled lever CID the strip of paper hp against the roller c, covered with ink. Depending on the duration of its activation, it draws points or dashes, which form the Morse alphabet; for example, - = a, -··· = b, etc. Its place was taken gradually by the telegraph type writer of David Edward Hughes 1831-1900. Submarine telegraphy employs the very sensitive galvanometer of W.Thomson, in which a fine glass tube is deflected to the right or left, the ink is sprayed on the paper strip (Fig. 546); more recently, also a typewriter is being used.
Electro-magnets are artificial magnets. There exist also natural ones - magnetite, an iron ore with the known magnetic properties of attracting iron and steel; however, only the fact of their mere existence is of interest here. Their magnetic effects disappear in comparison with the forces of artificial magnets. since artificial magnets can be very strong; an electro-magnet which can lift an adult human being has only moderate dimensions. Pieces of iron, attracted by a magnet, become themselves magnetic, if they are in contact with it, and even more so, if you move several times a magnet over their surface in the same direction: They themselves become artificial magnets.
A bar magnet is most strongly magnetic at its ends, less so towards its centre and not at all at the centre. This is demonstrated, for example, by its appearance, when it is covered with iron dust. The longer it is compared with its cross-section, that is, the more it approaches the form of a needle, the more seems the maximum of its ability to attract iron concentrated at two points - its poles - (in the case of bar magnets, at a distance of about 1/12 of its length). The line, on which the poles lie, is called the axis of the magnet. It you support a needle formed magnet (Fig. 548), so that its axis can turn freely in the horizontal plane, it comes to rest in a certain direction. If you force it out of this position and release it, it will resume its initial direction. If you bring it into this direction, but in such a manner that the initially forward end is at its back, it will rotate by 180º and resume its initial position. This direction is approximately North-South. This is the reason why you call the pole which seeks North, the North pole, the other the South pole. The turning force, which returns the needle to its initial position, arises from the action of Earth's magnetism. You can confirm the fact that only turning forces are active by letting a magnet swim on water in a vessel(Fig. 549); it will be turned into the magnetic meridian, but not be attracted to the edge of the vessel. Due to its direction seeking property, magnets serve as compass - needles which are supported on a pin and are able to rotate in the horizontal plane. (Fig. 548).
For use on ships, the graduated circle (compass card) is fixed to the magnetic needle; the ship moves around it, and the compass housing has a mark corresponding to the ship's keel. To remove the effect of the ship's oscillations, the compass is mounted in a Cardan (gimbal) suspension. In the compass used on land for direction finding, the needle moves in a circular container, the rim of which carries a scale.
Fundamental law of the force of interaction between two magnets
North and South poles of magnets are related to each other in much the same sense as positive and negative electricity. If you place two magnet needles side by side, like poles repel (North pole repels North pole, South pole repels South pole), while the North pole attracts the South pole, and so on. Just recall what was said earlier about the interaction between like and unlike electricities; imagine the electricity replaced by magnetism, with North magnetism in place of positive and South magnetism in place of negative electricity. You will then immediately understand the fundamental law (Coulomb): K = m1·m2/r². This law states: two poles with magnetic quantities m1 and m2 interact with a force K, which is directly proportional to the product of the amounts of magnetism and inversely proportional to the square of the distance between them - repulsion or attraction depending on the signs of their magnetism. The unit of the amount of magnetism is found as follows: let two equally strong poles be 1 cm apart; if they interact with the force 1, that is, 1 dyn, we ascribe to both of them unit magnetism (unit pole). At any location, the magnetic force is measured by the number of force units (dyn) with which it acts on a unit pole placed there. If m is the amount of magnetism at each of the poles of a bar magnet and l the distance between the poles, then its magnetic moment is m·l; it corresponds to the moment of a couple.
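To make the numbers concrete, the following short Python sketch (not part of the original text; the pole strengths, separation and distance are arbitrary illustrative values) evaluates Coulomb's law for magnetic poles in CGS units and the magnetic moment m·l of a bar magnet.

```python
# Minimal sketch (illustrative values only): Coulomb's law for two magnetic
# poles in CGS units, F = m1*m2 / r^2, with the force in dyn for pole strengths
# in CGS units of magnetism and r in cm; positive F means repulsion (like poles).

def pole_force_dyn(m1, m2, r_cm):
    """Force between two poles; sign convention: > 0 repulsion, < 0 attraction."""
    return m1 * m2 / r_cm**2

def magnetic_moment(m, l_cm):
    """Magnetic moment of a bar magnet with pole strength m and pole distance l."""
    return m * l_cm

if __name__ == "__main__":
    # Two unit poles 1 cm apart interact with exactly 1 dyn (definition of the unit pole).
    print(pole_force_dyn(1.0, 1.0, 1.0))      # -> 1.0 dyn
    # Arbitrary example: poles of 1000 and 1 units, 10 cm apart (values used again later in the text).
    print(pole_force_dyn(1000.0, 1.0, 10.0))  # -> 10.0 dyn
    print(magnetic_moment(1000.0, 8.0))       # moment of a bar with m = 1000, l = 8 cm
```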
Naturally, in order to confirm Coulomb's law, you cannot produce a single, isolated pole, any more than you can produce one kind of electric charge without the other. However, you can attain almost the same objective with two very long bar magnets (Coulomb). Their magnetism is concentrated at their extreme ends, the rest of their length being practically inert. For example, you can let the South poles of two such magnets approach each other and investigate their interaction without the North poles, owing to the length of the bars, disturbing the experiment.
Magnetic lines of force
The fundamental law of the force acting between two magnet poles has exactly the same form as the corresponding law for electric charges. By considerations, analogous to the ones just referred to, we arrive also here at the magnetic potential, the level surface of the magnetic potential and above all at the magnetic lines of force (Faraday). All the lines of force characterize the magnetic field - our next topic.
If you place a small magnetic pole as a test body somewhere in a field, a certain force acts on it: its direction is visualized by the direction, its magnitude by the density, of the lines of force at that location in the field. Lines of force are only products of our imagination; nevertheless you can display them. Let NS be a bar magnet; place a small magnet needle near it in the plane of Fig. 550a. The poles of the magnet and of the magnet needle interact, and the needle sets itself in the direction of the resultant force acting on it; this direction is that of the line of force at that location. If the entire field is occupied by magnet needles, we get a complete picture of the lines of force. (We disregard here the interaction of the magnet needles with each other!)
In order to display the lines of force, you can use iron dust; under the influence of the magnet, the particles themselves become magnetic and arrange themselves just like the magnet needles. For example, place a sheet of paper on top of the magnet, cover it with iron dust and tap the paper softly.
You must take into consideration that such lines of force enter space in all directions (Fig. 550b), that is, that Fig. 550a only tells us about their layout in the plane of the drawing.
Magnetic field strength, measured by the density of the lines of force
Fig. 550a shows that close to the magnet, that is, where the force is greatest, the lines of force are closest together; further away, where the force is weaker, they are wider apart. The density of the lines of force at a location in the field can therefore serve as a measure of the local force; by density we understand the number of lines of force which pass through a 1 cm² cross-section perpendicular to the direction of the lines of force. (Do not view these lines of force as real objects! The force around a pole of a magnet is always uniformly - more strictly: continuously - distributed and by no means concentrated on lines. Only the clarity associated with the image of lines of force justifies us in imagining the continuous field of force replaced by the discrete field of lines.)
We can express the field strength at a given location with the aid of the number of lines of force: around a unit pole, at a distance of 1 cm, the force is everywhere 1 dyn. The spherical surface of radius 1 cm about the unit pole has the area 4π·1² = 4π cm². If we now subdivide the lines of force radiating from the unit pole into 4π bundles (Faraday's induction tubes), then every cm² at a distance of 1 cm is hit by one bundle of lines of force. In order to simplify the picture, imagine every individual bundle replaced by a line of force running along its centre*. At a distance of 1 cm from the unit pole, there then passes exactly one line of force through an area of 1 cm² perpendicular to its direction. Hence 4π·m lines radiate from a pole of strength m. At a distance of 1 cm from this pole, m lines then pass through 1 cm², in agreement with Coulomb's law, according to which a force of m dyn acts at this place. If we now imagine another spherical surface around the pole at a distance of 2 cm, it has the area 4π·2² cm². Through every cm² of this surface there now pass, of the 4π·m lines, only 4π·m/(4π·2²) = m/4 lines, that is, the density of lines at the distance of 2 cm has sunk to 1/4 or, more generally, at the distance r to 1/r² of its value at 1 cm. However, according to Coulomb's law, the force has decreased in the same ratio.
* By 4π = 12.56 lines of force you must, of course, understand 1256 lines per 100 cm².
We can now make the following statement: if 4π·m lines of force radiate from a pole of strength m, then everywhere in the field the force is numerically equal to the number of lines of force which pass there through 1 cm² at right angles to the lines of force. The density of the lines of force thus becomes a measure of field strength. The unit of field strength is called 1 Gauß. The statement "a field of 100 Gauß" then says that at this location 100 lines of force pass at right angles through 1 cm², or that a force of 100 dyn acts on the unit pole.
If the strength of a pole is m = 1000 units of magnetism and r = 10 cm, then the number of lines of force radiating from this pole is n = 4π·m = 4·3.14·1000 = 12560, whence the density of lines of force B at the distance of 10 cm is
B = n/(4π·r²) = 12560/1256 = 10,
that is, 10 lines meet 1 cm² at that location in the field. The force according to Coulomb's law is likewise:
m1·m2/r² = 1000·1/10² = 10 dyn.
Hence the field has at 10 cm distance the strength of 10 Gauß.
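The arithmetic of this worked example can be checked with a short Python sketch; it only re-uses the text's values m = 1000 and r = 10 cm and is, of course, not part of the original.

```python
import math

m = 1000.0   # pole strength in CGS units of magnetism (value from the text)
r = 10.0     # distance in cm

n_lines = 4 * math.pi * m                 # total lines of force radiating from the pole
density = n_lines / (4 * math.pi * r**2)  # lines per cm^2 at distance r
force_on_unit_pole = m / r**2             # Coulomb's law with m2 = 1 (unit pole), in dyn

print(n_lines)             # about 12566 (the text rounds pi to 3.14 and gets 12560)
print(density)             # 10.0 lines per cm^2  -> field of 10 Gauss
print(force_on_unit_pole)  # 10.0 dyn on a unit pole -> the same 10 Gauss
```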
All these considerations refer only to the field of a point-like pole. However, they can be extended to magnets of arbitrary shape and to arbitrary fields. In particular, if the lines of force are parallel and equally spaced, the field is said to be homogeneous. For example, you can consider Earth's field to be homogeneous within the spaces of practical measurements. The horizontal component of Earth's magnetic field, which plays an important role in many measurements, is about 0.2 Gauß in Central Europe (0.18 in Berlin). By means of electromagnets, Kapitza (1894-1984) achieved in 1927 fields of about 320,000 Gauß (in a volume of 2 cm³).
Terrestrial magnetism. Its elements (declination, inclination, horizontal intensity)
A magnetic needle does not point exactly North; it deviates from the geographical meridian by a few degrees - the declination angle - at some locations to the West (in 1935 in Berlin by about 9º), at other locations to the East. The plane through Earth's centre and the direction of the needle is called the magnetic meridian. The point on Earth's surface to which magnet needles point with their North pole lies in the arctic part of North America (69º 18' N, 95º 27' W), the corresponding one on the southern hemisphere in the South Sea to the south of Australia (72º 25' S, 154º E). These two points are called Earth's magnetic poles.
If you suspend a magnet needle in the magnetic meridian and let it turn about a horizontal axis through its centre of gravity, its magnetic axis ab forms an acute angle with the horizon (Fig. 551); on the Northern hemisphere the North pole, on the Southern hemisphere the South pole points downwards. The acute angle between the downwards-inclined part a of the magnetic axis and the horizontal plane is called the angle of inclination. It was about 66º in Berlin in 1935. Like the declination, it changes with time. Near the magnetic poles of Earth, the needle stands vertical: the inclination is 90º.
The angles of declination and inclination at a location on Earth's surface yield the direction of the local force of Earth's magnetism. In this direction, the terrestrial magnetism attracts the one pole of a needle with the same strength as it repels the other. The strength with which it acts there on a unit pole is called the total intensity (T). If you decompose T into three mutually perpendicular components - one vertically downwards, the other two in the horizontal plane South-North and West-East - the first determines the vertical intensity, the other two together the horizontal intensity H. Declination, inclination and horizontal intensity are referred to as the elements of Earth's magnetism.
Earth's magnetism in Central Europe for the epoch 1910.0 (German Seewarte): the accompanying tables (not reproduced here) list the horizontal intensity in Gauß, the declination and the inclination against geographical position (columns headed "East of Greenwich", 45º to 55º), together with their mean annual changes - of the order of +0.00014 to -0.00034 CGS for the horizontal intensity and a few hundredths to about a tenth of a degree per year for the angular elements.
If you join on a chart the points at which one of these elements has the same magnitude - for example, where the horizontal intensity is 0.2 Gauß - you obtain certain curves (iso-magnetic lines) which cover Earth's entire surface. The most important ones are: lines of equal declination (isogones), of equal inclination (isoclines), of equal total intensity (isodynams) and of equal horizontal intensity (horizontal isodynams) (Fig. 552).
The numerical values of the terrestrial magnetic elements, obtained for thousands of locations, lead to the conclusion that Earth can be considered to be a magnet whose axis is inclined to the axis of rotation by 12º. There also exists a magnetic equator, along which the inclination vanishes. The regions of Western and Eastern declination are separated by the isogones of 0º - the agones; in 1935 there were two of them. The numerical values of the terrestrial magnetic elements are not constant, but oscillate steadily in time (secularly, annually, even daily); on exceptional occasions they change abruptly - like a storm (magnetic storm) - coinciding with such events on the Sun; Earth currents and polar lights are linked to Earth's magnetism. The study of the elements of terrestrial magnetism is mainly linked to the names of Humboldt, Gauß, von Neumayer and L. A. Bauer. The first proposed the setting up of magnetic observatories, the second looked after the accuracy of terrestrial magnetic measurements, the third made terrestrial magnetism an indispensable part of scientific expeditions; the fourth, in the service of the Carnegie Institution, measured magnetism for 20 years on the oceans aboard a ship built without iron parts, filled gaps in many magnetic charts and showed that many previous measurements were in error.
Absolute (Earth magnetic) measure of current intensity
You can measure Earth's magnetic field in absolute units (cm, g, s), whence the field due to a current can also be measured by comparison with the terrestrial magnetic field. In this way, it must be possible to obtain an absolute unit of current strength (apart from the technical unit defined with the silver voltameter).
An instrument for absolute current measurements is shown in Fig. 553; it is a circular conductor in a vertical plane, which surrounds a very short magnet needle that can rotate in the horizontal plane. The vertical axis of rotation of the needle coincides with the vertical diameter of the circle, and the plane of its rotation with the horizontal plane through the circle's centre. You place the plane of the circular conductor in the magnetic meridian. As long as no current flows through the circuit, the needle sets itself along the horizontal diameter of the circle. However, if a current flows in addition to Earth's field, the needle turns in the direction of the resultant force due to the simultaneous action of the terrestrial field and the circuit. Earth's field acts only with its horizontal component H. It acts on the unit pole with H dyn, hence on each pole of the needle, carrying the amount of magnetism m, with m·H dyn. After the needle NS has turned by the angle α out of the magnetic meridian into the position N'S', the lever arm (Fig. 554) at which this force acts is p, that is, (l/2)·sin α (setting the distance between the poles = l; in the position N"S", this arm would be l/2), whence the turning moment on the needle due to Earth is m·H·l·sin α. The expression m·l = M is the magnetic moment of the needle.
We now turn to the magnetic force of the current. We compute it on the basis of the law of Biot and Félix Savart (1791-1841), established by means of many experiments, which yields the force exerted by a short, current-carrying conductor - a circuit element - on a magnet pole at an arbitrary distance from it. Let l denote a short piece of a conductor (Fig. 555), i the current intensity in it, m a magnet pole with the amount m of magnetism at the distance L from l, and φ the angle between the directions of l and L. The force exerted by l on m is then proportional to sin φ·i·m·l/L². If L is perpendicular to l, that is, sin φ = 1, the force is proportional to i·m·l/L². It tends to carry m (Fig. 533) along a circle about l, that is, it acts perpendicularly to the direction L. The lower part of Fig. 555 displays l perpendicular to the plane of the drawing; the arrow indicates the direction in which the current acts on the pole. We are here only interested in the action of a circular current on a pole at the centre of the circle. Let r be the radius of the circle, i the current intensity, m the amount of magnetism of the pole. The force exerted by the current on the pole is proportional to i·m·2πr/r², since l = 2πr and L = r, that is, it is proportional to i·m·2π/r. If the pole is a unit pole - we recall that we wanted to measure the field strength by the action on it - then the strength at the centre is proportional to 2π·i/r. In other words, the field intensity increases in the same ratio as the current intensity i increases and as r is reduced, that is, the smaller the circle carrying the current around the pole.
This relationship led Weber to define the absolute unit of current intensity. Imagine a circle of radius 1 cm, on it an arc of 1 cm length, and at the centre of the circle a unit pole. Weber calls unit current that current which is so strong that this piece of the conductor exerts on the pole the unit force, or - if you take the entire circle into consideration - which, when it flows around the unit pole at the centre of a circle with a radius of 1 cm, exerts on the pole the force 2π dyn (equal to the weight of about 6.4 mg). This current is the absolute unit of current intensity, measured electro-magnetically (earlier it had been measured electro-statically). If we conduct an absolute unit of current through a silver voltameter, it deposits 11.18 mg of silver each second. Industry does not employ the absolute unit, but its tenth part: 1/10 of the absolute unit of current intensity is called 1 Ampere.
This definition of current intensity fixes the hitherto indefinite proportionality factor in the law of Biot-Savart. A current of i absolute units of current intensity now exerts on each pole, of magnetism m, of a magnet needle at the centre of a circular current with radius r the force m·2πi/r dyn (Fig. 556). The distance of this force from the axis of rotation of the needle is q = (l/2)·cos α, whence the turning moment exerted by the magnetic force of the current on the needle is l·cos α·2πi·m/r or, since we have set m·l = M, cos α·2πi·M/r. Since the needle in the position N'S', acted upon by the two forces (current, Earth's magnetism), is at rest, the turning moments of the two forces are equal, that is, M·2πi/r·cos α = M·H·sin α, whence i = (H·r/2π)·tan α. In order to express i in absolute units, you must know the horizontal component H of the terrestrial magnetism*. If you change the current intensity, the angle α which the magnet needle forms with the plane of the magnetic meridian changes, while everything else remains unchanged. Denoting the new current intensity by i1 and the new angle by α1, then i1 = (H·r/2π)·tan α1, whence i/i1 = tan α/tan α1, that is, the current intensities are related like the tangents of the deflection angles of the needle. The constant H·r/2π is called the reduction factor of the instrument shown in Fig. 553.
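A hedged Python sketch of the tangent-galvanometer relation i = (H·r/2π)·tan α derived above, together with the ratio rule i/i1 = tan α/tan α1; the coil radius and deflection angles are made-up illustrative values, and H is the Central European value quoted in the footnote that follows.

```python
import math

def current_abs_units(H_gauss, r_cm, alpha_deg):
    """Current in absolute (electromagnetic) units from the deflection of a
    tangent galvanometer: i = (H * r / (2*pi)) * tan(alpha)."""
    return H_gauss * r_cm / (2 * math.pi) * math.tan(math.radians(alpha_deg))

H = 0.2                       # horizontal intensity of Earth's field in Gauss (see footnote)
r = 15.0                      # coil radius in cm (illustrative)
alpha1, alpha2 = 30.0, 45.0   # two deflection angles (illustrative)

i1 = current_abs_units(H, r, alpha1)
i2 = current_abs_units(H, r, alpha2)
print(i1 * 10, "Ampere")      # 1 absolute unit = 10 Ampere
print(i2 * 10, "Ampere")
# The currents stand in the ratio of the tangents of the deflection angles:
print(i1 / i2, math.tan(math.radians(alpha1)) / math.tan(math.radians(alpha2)))
```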
*In Central Europe, it is about 0.2 units of magnetic field strength, that is, Earth's magnetic field acts on the unit pole there with a force of about 0.2 dyn.
There are many instruments which act similarly to that in Fig. 553. An instrument for the accurate measurement of currents is called a galvanometer; if it serves only to indicate the existence of a current, it is called a galvanoscope. In order to raise the sensitivity of an instrument as much as possible, you employ, instead of a single wire loop, a narrow coil with many turns, so that even a weak current generates a strong field (multiplier). Moreover, you replace the needle rotating on a point by a small magnet suspended from a thin thread, whose exact angle of rotation is determined by a special procedure. The sensitivity is also raised by weakening Earth's magnetic force, which tends to turn the needle back into the magnetic meridian. You achieve this by means of a magnet which you install near the instrument so that its lines of force largely neutralize those of Earth's field. You can also achieve it by providing the instrument with an astatic** pair of needles instead of a simple needle; this is a pair of two rigidly connected needles, as nearly equal as possible, which lie parallel above each other and have their poles in opposite directions (Fig. 557). The directing force exerted by Earth's field on such a pair is small, because it attempts to turn the two magnets in opposite directions. If the two magnets were exactly alike, Earth's field would exert no directing force at all. In reality, the strengths of the magnets always differ a little, so that a certain, although very weak, adjustment by Earth's field still occurs. There would be no sense in placing the whole astatic pair inside a coil, since the coil could act only very weakly on such an almost moment-free pair. In fact, you place only the one magnet of the pair inside the coil (Fig. 558). The field of the current can then act on the total pole strength of that one magnet, while Earth's field acts only on the difference of the pole strengths of the two magnets. (** Greek stasis = standing.)
All instruments of this kind are sensitive to stray magnetic fields, such as those originating from power lines, electric railways, etc.; galvanometers are therefore also constructed on quite a different principle, by suspending a rotatable coil in the field of a very strong permanent magnet. Fig. 559 shows the moving-coil galvanometer with mirror reading of d'Arsonval. N and S are the poles of a strong horseshoe magnet and C an iron cylinder between them, through which the lines of force pass from N to S. The ring-shaped gap between the poles and the iron cylinder receives the movable coil, which consists of a frame wound with several turns of wire. The current enters through the suspension wire A and leaves through the fine coiled spring wire M. If no current flows through the coil, it aligns, due to the torsion of the suspension, with the plane of the magnet. Passage of current generates in it a field whose lines of force are perpendicular to the plane of the windings. The interaction between this field and that of the horseshoe magnet causes rotation of the coil. The measurement of the angle of rotation is discussed later on. Since the coil is always located in a strong magnetic field, the readings are not appreciably disturbed either by Earth's field or by external fields of unknown origin.
The same principle is employed in the current- and voltage-meters of Weston, which are widely used for technical and scientific purposes (Fig. 560). Between the poles N and S of a fixed, strong, permanent magnet turns the coil P. A spiral spring F gives it a certain rest position relative to the lines of force of the field. If current passes through the coil, it turns until the deflecting force of the magnetic field balances the torsion of the spring. If the current is switched off, P returns to its old position. The coil is linked to a pointer, which moves over a scale calibrated in Volt or Ampere, depending on whether the instrument is used for measurements of voltage or current. For practical reasons, the coil of the ammeter is given a small, that of the voltmeter a very large, resistance.
Magnetism, a general property of matter. Paramagnetism. Diamagnetism. Permeability
Hitherto, we have only studied the forces in the neighbourhood of a magnet - its field; next, we look at the magnet itself. Does the ability to be magnetized belong only to iron, or also to other substances? Following Faraday (1846), substances can be divided into two classes, as the following experiment demonstrates: in Fig. 561, N and S are the poles of a strong magnet, the dashed segments its lines of force. Two equally shaped small rods, P of chromium and D of bismuth, are suspended symmetrically between the poles and then released. The chromium bar P aligns itself with the lines of force (axially), the bismuth bar D perpendicularly to them (equatorially). Faraday calls substances which behave like P paramagnetic, the others - the majority - diamagnetic. Paramagnetic substances are, for example, iron, nickel, cobalt, chromium, palladium, platinum, osmium and many aqueous solutions of metal salts; diamagnetic ones are bismuth, mercury, phosphorus, sulphur, water, alcohol and many gases.
The lines of force demonstrate the difference between para- and dia-magnetic substances. Fig. 562 shows a magnetic field which is initially uniform, that is, its lines are parallel and equidistant. If you place a paramagnetic substance P into this field, the lines of force move closer together at the location now filled by the paramagnetic body. If you place a diamagnetic substance D there, the lines move further apart. So we can say that the lines of force are deflected from their paths on entering the chromium or the bismuth: they prefer, on the one hand, the path through the chromium to that through the air (Fig. 562 left), on the other hand, the path through the air to that through the bismuth (Fig. 562 right). It seems as if chromium lets lines of force pass more readily than air, and air more readily than bismuth. Following Kelvin, this behaviour of substances is called their permeability (μ)*. It measures how many more lines of force pass through the space filled with a given substance than through a vacuum. We define: paramagnetic substances are more permeable, diamagnetic ones less permeable, than vacuum. Hence, whether a body will place itself axially or equatorially depends not only on its substance, but also on the magnetic conditions of its neighbourhood. No material is strongly diamagnetic, bismuth the most of all; even the permeability of air relates to that of bismuth only as 1 : 0.99982.
* The magnetic counterpart of the dielectric constant. Both have the value 1 in vacuum. There is no electric analogue to diamagnetism: there exist no dielectrics whose dielectric constant is < 1.
You write μ = 1 + 4πκ, a pure number, the value 1 belonging to air (more strictly, vacuum). The quantity κ is the susceptibility; it measures the dependence of the magnetization (magnetic moment per unit volume) on the strength of the field, while μ measures that of the magnetic induction. Only the permeability of the ferro-magnetic materials depends on the field strength - and very strongly. The larger the permeability of a kind of iron, the more useful it is for electro-technical purposes; it lies for ordinary kinds between 2000 and 5000 and rises for special ones to 20000. The several-thousand-times larger permeability of iron is employed to protect sensitive equipment from the effects of magnetic fields (magnetic shielding). You surround the equipment with iron covers, which conduct the lines of force through themselves and thus keep them away from the apparatus; for example, in the armoured galvanometer of Du Bois (1863-1918) and Heinrich Rubens (1865-1922), the extremely light magnetic system is enclosed within several iron shields. The formulae for permeability and susceptibility are:
μ = magnetic induction / magnetic field strength = B/H;   κ = intensity of magnetization / magnetic field strength = I/H
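A minimal Python sketch (illustrative values only) of the two defining formulas above and of the relation μ = 1 + 4πκ; the Gaussian relation B = H + 4πI is assumed here only to construct a consistent set of numbers.

```python
import math

def permeability(B, H):
    """mu = magnetic induction / magnetic field strength."""
    return B / H

def susceptibility(I, H):
    """kappa = intensity of magnetization / magnetic field strength."""
    return I / H

H = 2.0                  # field strength in Oersted (illustrative)
kappa = 200.0            # assumed susceptibility of a soft-iron sample (illustrative order of magnitude)
I = kappa * H            # magnetization
B = H + 4 * math.pi * I  # induction, using the Gaussian relation B = H + 4*pi*I

mu = permeability(B, H)
print(mu)                                      # about 2514, within the 2000-5000 range quoted above
print(1 + 4 * math.pi * susceptibility(I, H))  # the same value, via mu = 1 + 4*pi*kappa
```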
Molecular theory of magnetism
The preceding considerations have shown that ferro-magnetism is just a special case. In exploring the mechanism which appears to us as magnetism, we could start from any material. We start from the steel magnet, since it displays most strongly the characteristic phenomena which concern us here; for the sake of clarity, consider a bar magnet (Fig. 448) and recall that every magnet has two poles. However often you break the magnet, each fragment is again a magnet. At the fracture, each has a pole, equally strong but of opposite sign to the pole at the fracture face of the neighbouring piece. If we fit the pieces together again in the order in which they were broken, the restored magnet has the same properties as before. This suggests that also the smallest parts of the magnet, the molecules, are magnets (magnetic dipoles), and hence that also inside a magnet there acts a force directed from pole to pole (internal field). If we picture the molecular magnets as compass needles lined up pole to pole, the lines of force (Fig. 550a) appear as closed curves: in passing from one pole to the other, they run partly outside, partly inside the magnet.
Experience tells us: contact with a magnet makes non-magnetic iron magnetic - mere contact only weakly, mutual sliding (stroking) of the touching pieces much more strongly. Even the mere approach to a magnet (the field of a magnet) makes iron magnetic; indeed, it generates in the part of the iron nearest to the magnet a pole of opposite sign. The theory of Wilhelm Eduard Weber explains this as follows: non-magnetic iron also consists of molecular magnets, but their axes point in all directions, whence the totality of the molecular magnets, that is, the piece of iron, is without polarity. An external magnetic influence directs the molecular magnets like magnet needles, all North poles in one direction, all South poles in the opposite direction, whence the piece of iron acquires North and South magnetism. If the magnetic influence is removed again, this induced magnetic state of the iron does not vanish completely. The magnetism which remains is called remanent, and the ability of the iron to hold on to it, the coercive force. - James Alfred Ewing (1855-1935) has shown, with the aid of many closely spaced small magnet needles, that the most essential peculiarities of magnetism can be explained in this manner.
Molecular magnets were interpreted by Ampère (1820) as electromagnets; he imagined every iron molecule to be the seat of a circulating electric current. But where is the electromotive force which maintains this current permanently? And why is Joule heat not continually developed in the circuit? We know of no currents which lack a circuit's characteristic property - electric resistance. This question has not been resolved by the atomic theory of Bohr either, which also leads to molecular currents. However, molecular currents are present in ferro-magnetic substances and have even been demonstrated experimentally - by Samuel Jackson Barnett (1873-1956) in 1915, and by Einstein and de Haas (1915) on the basis of gyroscopic considerations (the theory of tops).
As the temperature rises, magnetizability decreases steadily and vanishes almost completely at a certain temperature (transition temperature), named the Curie point after its discoverer Curie - for iron at about 765ºC, for nickel at about 360ºC. This is in agreement with the theory of molecular magnets: the external magnetic field tends to arrange the magnetic axes of the molecules uniformly, but the heat motion tends towards their complete disorder, comes closer to this goal as the temperature rises, and eventually overcomes the effect of the field. Hence, beyond the transition temperature, even ferro-magnetic substances are only (strongly) para-magnetic. At the Curie point, the specific heat of the substance changes discontinuously.
The electron structure of atoms, following Lenard, Rutherford and Bohr, makes it possible to explain why substances of every kind of atom react to an external magnetic field: the reaction arises from the action of the field on the electrons circulating about the nucleus, since the motion of the electrons represents an Ampère molecular current, the magnetic action of which is equivalent to that of a bar magnet. To start with, the inevitable action of the magnetic field on the orbits of the electrons is to deform them. However, the manner in which the atom as a whole reacts to the field depends on whether the electron orbits in the atom are such that the resulting magnetic moments balance each other or combine into a total moment. If they balance, the atom is non-magnetic and reacts to the external field dia-magnetically, unless the external field cancels the mutual compensation and the atom thereby obtains an induced magnetic moment. If they combine into a total moment, the atom is para-magnetic and reacts correspondingly.
We have spoken here all the time about the magnetic properties of matter in general, without referring to ferro-magnetism. It is really only a special case, and one belonging probably to crystal physics rather than to atomic physics. Present-day research (in 1935!) attempts to understand magnetic properties as an atomic elementary process. This is the direction of Bohr's atomic theory, which identifies the circulating electrons with Ampère's molecular currents and in the process - this is most important! - leads to an atomic unit of magnetic moment, so to say an elementary quantum of magnetic moment, the magneton (a term due to Pierre Ernest Weiss (1865-1940), who, without starting like Bohr from theoretical considerations, had been led to it earlier empirically). According to Bohr, the smallest atomic magnetic moment (Bohr's magneton) is generated by an electron which circulates on a one-quantum orbit about a positive nucleus; it is computed to be m = 9.21·10⁻²¹ or, related to the mole, m·N (where N is Loschmidt's number 6.06·10²³), M = 5548 Gauß·cm³. Weiss' magneton is about one fifth of this and contradicts quantum theory.
One of the most important predictions of the quantum theory (Sommerfeld and Debye) concerns the behaviour of atomic magnets in a magnetic field: what direction will the vector of the magnetic moment take relative to the direction of the lines of force? In a field, the axes of the magnetons will not have arbitrary directions, will not form all possible angles with the lines of force - that is, the directions will not be distributed arbitrarily - but they will form with them only certain angles, which depend on the moment of the magneton, but not on the strength of the field. If the atom has a moment of one magneton - the simplest case! - it must, according to the theory of direction quantization, align itself so that the axis of the moment coincides with the direction of the external field. Two possibilities correspond to this mechanically unique specification of direction: the atom, conceived as an elementary magnet, can align itself so that its magnetic axis points either along the direction of the external field or opposite to it (parallel and anti-parallel orientation of the moment axis with respect to the external field). In 1935, the theory of direction quantization could not yet be linked to the ferro-magnetic problem. Otto Stern (1888-1969) and Walter Gerlach have demonstrated visually, with silver atoms, this spatial quantization of directions and have thereby proved the atomic theory of the magnetic moment, finding Bohr's magneton as the elementary quantum of magnetic moment. According to the theory, the normal silver atom can align itself parallel or anti-parallel to the lines of force; in a non-homogeneous field a beam of atoms therefore splits into two parts - Stern and Gerlach confirmed this experimentally for beams of silver atoms.
Ferro-magnetism. Magnetic hysteresis. Remanence and coercive forces
Iron, nickel and cobalt (and Heusler's alloys, for example, 55% copper, 30% manganese, 15% aluminium, which owe their ferro-magnetic character to the manganese) differ from all other para-magnetic bodies in that they themselves become real magnets under the influence of magnetic forces, that is, permanent magnets. However, their magnetism decreases gradually, for example, as the temperature rises or during certain mechanical treatments, but it never disappears totally. They form the ferro-magnetic group of metals (Fig. 564). Nickel and cobalt, viewed as magnets, are practically unimportant; the importance of iron is much greater in electro-technics.
Ferro-magnetism is decisively characterized by hysteresis (Warburg, 1881). A non-magnetic bar of iron (a magnetic one must first be demagnetized in a solenoid), placed in a magnetic field, becomes magnetized to a certain degree. If you remove it from the field, it retains some of this magnetization. If you then place it into a field of different intensity, its magnetization depends not only on the strength of this field but also on the amount of its earlier magnetization (the magnetic prehistory of the iron).
In order to generate the field, you employ a solenoid, push the bar to be investigated into it, magnetize it by switching on the current and gradually strengthening it, then reverse the direction of the current and demagnetize it again. (The field is more or less uniform; its strength H can be computed from the dimensions of the solenoid and the current intensity.) The curve 0A (Fig. 565) shows the growth of the pole strength of the iron bar with that of the field. The abscissa is the field intensity H acting on the bar, the ordinate the corresponding pole strength B of the bar. Before the current is switched on - for H = 0 (starting point) - the bar has the pole strength B = 0. As you strengthen the current in the solenoid, the field intensity H grows, and with it the pole strength B of the bar. When H = 1, the pole strength B has reached the value 3000 (the point x). At approximately H = 8, the bar is saturated; its pole strength does not grow further, even if you strengthen the field. If we again weaken the field, the pole strength of the bar also drops, but not in such a way that the former value of the pole strength (B = 3000) returns at the former field strength, for example, H = 1. A new curve AC arises: now larger values of B correspond throughout to the same values of H. Thus, B = 5800 corresponds to H = 1. When H returns to 0, B is still 5000 (the point C); that is how strong the residual magnetism is, and its strength varies with the kind of iron.
Fig. 565 shows: at a given field intensity H, the bar has a smaller pole strength while it is being magnetized (values of B on the branch ascending to A) than while it is being demagnetized (values of B on the branch descending from A to D). The difference between the two values of B becomes smaller, the closer the field strength H is to the saturation value, and vanishes at that value. If you now send current through the solenoid in the opposite direction, whereby you invert the direction of the magnetic field (H = -1, etc.), the pole strength of the magnet decreases more and more; at about H = -2, it vanishes (the point D), that is, the iron is again non-magnetic. If you go to larger negative values of H, the iron reverses its poles. Finally, at H = -8, you again reach saturation, but with interchanged poles (the point A'). If you now weaken the field again, we have the same experience as in the upper part of the curve: the pole strength does not move along A'D, but returns along the curve A'C'D', which is initially much flatter. At the field strength H = 0, there again remains a very considerable magnetism (the point C'); only the transition to positive field strengths removes it (the point D'). Hence we can say that iron always tends to hold on to the magnetic state which it has acquired. It resists to some extent the change which the change of the field tries to impose on it; the changes of the pole strength always lag behind those of the field strength. This is why this behaviour is called magnetic hysteresis (Greek hystereo = I lag behind). Ferro-magnetic substances differ fundamentally from all other substances by their hysteresis. Only iron, cobalt and nickel, as well as certain of their alloys, display hysteresis.
The distance 0C, that is, the pole strength which the bar retains although the field strength is zero, measures its residual magnetism (remanence). The distance 0D, the oppositely directed field strength required to make B = 0, that is, to render the bar unmagnetic again after it had previously been magnetized, yields the measure of the force with which it holds on to its acquired magnetism (coercive force). Different kinds of iron differ greatly in their residual magnetism and coercive force, and therefore, for a given technical application, they must be selected on the basis of their recorded magnetization curves. Soft iron (Swedish charcoal iron) has the largest residual magnetism, but the smallest coercive force. If it is magnetized, it is soon saturated; if the magnetizing force is removed again, it retains much magnetism; however, the smallest demagnetizing force is sufficient to rid it of this magnetism. In contrast, steel retains in the same situation only very little magnetism, but holds on to that little amount, which only a very large, oppositely directed magnetizing force can remove. Hence you can readily make steel into a permanent magnet by means of a sufficiently large magnetizing force; soft iron, on the other hand, is useless for this purpose.
Thanks to its property of being very quickly magnetized and demagnetized, iron is the soul of electro-technics. In certain apparatus and machines, the magnetic cycle is traversed 50 to 60 times every second, approximately as in Fig. 565. The iron opposes this back-and-forth magnetization with a kind of magnetic friction, which shows itself as hysteresis and must be overcome by work. This work, however, is wasted: it is converted into heat and warms the iron to no purpose. You can compute the wasted energy from the area of the hysteresis loop. For one ton of soft iron, subjected to 100 magnetization cycles each second, this work amounts to 17-18 horsepower (James Alfred Ewing, 1855-1935).
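The statement that the wasted energy can be computed from the loop area can be illustrated by a rough Python sketch. It assumes Gaussian units, in which the energy dissipated per cm³ and per cycle is the B-H loop area divided by 4π; the loop area, iron density and horsepower conversion used here are assumptions chosen for illustration, not figures taken from the text, though the result comes out of the same order as the 17-18 horsepower quoted above.

```python
# Rough sketch: hysteresis loss of a block of iron cycled at frequency f.
# Assumption: in Gaussian units the energy dissipated per cycle and per cm^3
# equals (1 / (4*pi)) * (area of the B-H loop), loop area in Gauss*Oersted.

import math

loop_area_gauss_oersted = 1.2e5   # assumed area of the hysteresis loop (illustrative)
density_iron = 7.8                # g/cm^3 (approximate)
mass_g = 1.0e6                    # one metric ton of iron
frequency_hz = 100.0              # magnetization cycles per second (value used in the text)

volume_cm3 = mass_g / density_iron
energy_per_cycle_per_cm3 = loop_area_gauss_oersted / (4 * math.pi)   # erg
power_erg_per_s = energy_per_cycle_per_cm3 * volume_cm3 * frequency_hz
power_watt = power_erg_per_s * 1e-7
power_hp = power_watt / 735.5     # metric horsepower

print(round(power_hp, 1), "hp lost as heat")   # roughly 17 hp for the assumed loop area
```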
Introduction to Silviculture and Silvicultural System
Forest is defined as 'an area set aside for the production of timber and other forest products, or maintained under woody vegetation for certain indirect benefits which it provides.' This is the general definition of the term and lays emphasis on the direct and indirect benefits that forests provide.
But in ecology, it is defined as 'a plant community predominantly of trees and other woody vegetation, usually with a closed canopy'. This definition describes the forest as a kind of vegetation in which trees constitute the predominant part - distinguishing it from vegetation in which grasses or shrubs predominate - and in which the trees are fairly dense, so that their crowns touch each other.
In legal terminology, forest is defined as ‘an area of land proclaimed to be forest under a forest law’. This definition describes the forest not as a biological unit but as property having an owner and with rights of certain people. This definition is useful only in law courts, where cases pertaining to offences committed are tried.
CLASSIFICATION OF FORESTS
Forests can be classified on the basis of:
( i ) Method of regeneration;
( ii ) Age;
( iii ) Composition ;
( iv ) Objects of management ;
( v ) Ownership and legal status and
( vi ) Growing stock
(vii) Climatic and edaphic factors, geographical location and condition
( i ) Classification based on method of regeneration - Forests can be regenerated either from seed or from vegetative parts; those regenerated from seed are called high forests and those regenerated by some vegetative method are called coppice forests.
( ii ) Classification based on age - Even in plantations raised in a particular year, not all the trees are of the same age, because casualties are replaced in the second and third years. Thus forests having all trees of the same age are usually not found. Therefore, on the basis of age, forests are classified into even-aged (regular) forests and uneven-aged (irregular) forests.
( iii ) Classification based on composition - A forest may have only one species or more than one species. On the basis of the number of species present, the forest is classified as a pure or a mixed forest. Pure forest is defined as a forest 'composed almost entirely of one species, usually to the extent of not less than 80%'. It is also called a pure crop or pure stand. Mixed forest, on the other hand, is defined as 'a forest composed of trees of two or more species intermingled in the same canopy; in practice, and by convention, at least 20% of the canopy must consist of species other than the principal one. The species composing the mixture may be distinguished as principal, accessory and auxiliary'.
Principal species is defined as ‘ the species first in importance in a mixed stand either by frequency, volume or silvicultural value’ or ‘the species to which the silviculture of a mixed forest is primarily directed’. Accessory species is defined as ‘a useful species of less value than the principal species, which assists in the growth of the latter and influences to a smaller degree the method of treatment'. Auxiliary species is defined as ‘a species of inferior quality or size, of relatively little silvicultural value or importance, associated with the principal species’. It is also referred to as secondary species or subsidiary species.
( iv ) Classification based on objects of management - On the basis of objects of management, forests are classified as production forest, protection forest, recreational forest, etc. Production forest is 'a forest managed primarily for its produce'. Protection forest is defined as 'an area wholly or partly covered with woody growth, managed primarily to regulate stream flow, prevent erosion, or hold shifting sand'. Recreational forest is a forest which is managed only to meet the recreational needs of the urban and rural population.
(v) Classification based on ownership and legal status - On the basis of ownership, forests are classified into National forest and Private forest. National forest is 'a forest owned by the state', while Private forest is a forest owned by an individual person. National forests are further divided into Government managed forest, Community forest, Leasehold forest, and Religious forest.
Government managed forests are managed directly by the government. Community forest means a national forest handed over to a community forest users' group for its development, conservation and utilization for collective benefit. Leasehold forest means a national forest handed over as a leasehold forest to any institution established under current law, to an industry based on forest products, or to communities. Religious forest means a national forest handed over to any religious body, group or community for its development, conservation and utilization.
(vi) Classification on the basis of growing stock - On the basis of growing stock, forests are classified into normal and abnormal forest. Normal forest is defined as 'a forest which, for a given site and given objects of management, is ideally constituted as regards growing stock, age class distribution and increment, and from which the annual or periodic removal of produce equal to the increment can be continued indefinitely without endangering future yields'. Abnormal forest is 'a forest in which, as compared to an acceptable standard, the quantity of material in the growing stock is in deficit or in excess, or in which the relative proportions of the age or size classes are defective'.
Growing stock is the sum (by number or volume) of all the trees growing in the forest or a specified part of it.
(vii) Climatic and edaphic factors, geographical location and condition
Forest types based on climatic and edaphic factors, geographical location and condition are as follows:
Tropical forest – below 1000m e.g., Shorea robusta forest, Acacia catechu-Dalbergia sissoo forest.
Sub-tropical broad-leaved forest – 1,000-2,000m e.g., Schima-Castanopsis forest, Alnus nepalensis forest
Sub-tropical pine forest – 1,000 – 2,000 m e.g., Pinus roxburghii forest
Lower temperate broad-leaved forest – 1700-2700m e.g., Quercus floribunda and Q. lamellosa forest, Castanopsis tribuloides forest.
Lower temperate mixed broad-leaved forest – 1700-2700m e.g. forest of Lauraceae family
Upper temperate broad-leaved forest – 2200-3000m e.g. Quercus semecarpifolia forest
Upper temperate mixed broad-leaved forest – 2500-3500m e.g., Acer and Rhododendron forest
Temperate conifer forest - 2000-3000m e.g., Pinus wallichiana, Cedrus deodara, Tsuga dumosa and Abies pindrow forests.
Sub-alpine forest – 3000-4100m e.g., Abies spectabilis, Betula utilis forest.
Alpine scrub- above 4100m e.g., Juniperus-Rhododendron forest, Hippophae tibetana forest.
Growth and Development of Trees
The tree starts its life as a small seedling, which grows by increase in length and diameter of its shoot and root. As the shoot grows upwards, it develops branches and foliage. The root grows downwards and develops lateral roots and its branches. Thus the seedling grows not only by increase in size of its shoot and root but also formation of new organs. The increase in size is commonly referred to as growth or increment and the formation of new organs is referred to as development. Thus both growth and development are responsible for the change that takes place in a small seedling growing into a tree.
Various stages of growth and development of a plant are designated as follows:
Seedling- Seedling is a plant grown from a seed till it attains a height of about 1 meter, i.e., before it reaches the sapling stage.
Sapling- Sapling is defined as a young tree from the time when it reaches about one meter (3 feet) in height till the lower branches begin to fall. A sapling is characterized by the absence of dead bark and by its vigorous height growth.
Pole –Pole is defined as a young tree from the time when the lower branches begin to fall off to the time when the rate of height growth begins to slow down and crown expansion becomes marked.
Tree- Tree is the stage of the growth beyond the pole stage when the rate of height growth begins to slow down and crown expansion becomes marked.
Tree is essentially a plant. Plants may be classified into the following categories:
I) Herb- it is defined as a plant whose stem is always green and tender and whose height is usually not more than one meter.
II) Shrub – it is defined as a woody perennial plant differing from a perennial herb in its persistent and woody stem and less definitely from a tree in its low stature and its habit of branching from the base. A shrub is usually not more than 6 meters in height.
III) Tree- it is defined as a large woody perennial plant having a single well-defined stem (bole or trunk) and a more or less definite crown. A tree is usually more than 6 meters in height and can, according to species, reach up to 127 meters.
The standard tree classification adopted for regular crops is as follows (a small illustrative sketch based on the height fractions in this classification is given after the list):
· Dominant trees (D): All trees which form the uppermost leaf canopy and have their leading shoots free. These may be sub-divided, according to the position and relative freedom of their crowns, into:
1. Predominant trees: comprising all the tallest trees, which determine the general top level of the canopy and are free from vertical competition.
2. Co-dominant trees: comprising the rest of the dominants, falling short of and averaging about 5/6 of the average height of the predominant trees.
· Dominated trees (d): Trees which do not form part of the uppermost leaf canopy, but whose leading shoots are not definitely overtopped by the neighboring trees. Their height is about ¾ that of the tallest trees.
· Suppressed trees (s): Trees which reach only about ½ to 5/8 of the height of the best trees, with leading shoots definitely overtopped by their neighbors or at least shaded on all sides by them.
· Dead and moribund trees (m): This class also includes bent over or badly leaning trees usually of the whip type.
· Diseased trees (k): Trees, which are infected with parasites to such an extent that their growth is seriously affected or they are a danger to their neighbors.
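The height fractions in the classification above lend themselves to a small illustrative Python function (the sketch referred to before the list). The thresholds 5/6, 3/4 and 1/2-5/8 are taken from the list; the function name, the 0.95 cut-off for predominant trees and the simple rule of comparing each tree only with the mean height of the predominant trees are assumptions made for this sketch - real crown classification also considers crown position and freedom, not height alone.

```python
def crown_class(tree_height_m, predominant_mean_height_m):
    """Very rough crown classification of a single tree from its height relative
    to the mean height of the predominant trees (height criterion only)."""
    ratio = tree_height_m / predominant_mean_height_m
    if ratio >= 0.95:
        return "predominant"        # forms the general top level of the canopy
    elif ratio >= 5 / 6:
        return "co-dominant"        # about 5/6 of the predominant height
    elif ratio >= 3 / 4:
        return "dominated"          # about 3/4 of the tallest trees
    elif ratio >= 1 / 2:
        return "suppressed"         # about 1/2 to 5/8 of the best trees
    else:
        return "dead/moribund or suppressed remnant"

if __name__ == "__main__":
    top_height = 30.0   # mean height of the predominant trees in metres (illustrative)
    for h in (30.0, 26.0, 23.0, 17.0, 12.0):
        print(h, "m ->", crown_class(h, top_height))
```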
Dense canopy – There is strong competition between the crowns of the trees.
Normal canopy – Crowns slightly touch each other.
Light canopy – Crowns don’t touch each other but there is not enough space for an additional
Open canopy- There is sufficient space for an additional tree between two crowns.
Gaps – There is enough space for several trees between two crowns.
Stem-The principal axis of plant from which buds and shoots are developed. Stem, trunk and bole are synonymous.
Crown-Upper branchy part of a tree above the bole.
Canopy – The cover of branches and foliage formed by the crowns of trees in a forest.
Taproot – Primary root formed by direct prolongation of the radicle of the embryo.
Lateral roots – Arise from the taproot and spread laterally.
Adventitious Roots – Produced from the parts of the plants other than the radicle or its sub-division.
Evergreen plant- Perennial plant that is never entirely without green foliage, the old leaves persisting until a new set has appeared.
Deciduous plant - Perennial plant, which normally remains leafless for some time during the year.
Taper - The decrease in diameter of the stem of a tree or of a log from the base upward.
Conifer – A tree belonging to the order Coniferales of the group Gymnosperm, bearing cones and generally needle shaped or scale like leaves, usually evergreen and producing timber known as softwoods.
Broad-leaved – A tree belonging to the group Dicotyledons, and producing timber usually known as hardwood.
Rotation – The planned number of years between the formation or regeneration of a crop and its final felling.
Indigenous – Native to a specified area or region, not introduced.
Exotic – Not native to the area in question, introduced from outside.
Growing stock - It is the sum (by number or volume) of all the trees growing in the forest or a specified part of it.
Increment – The increase in girth, diameter, basal area, height, volume, quality, price or value of individual tree or crops during a given period.
Forestry Statistics of Nepal
Total land area = 14,718,100 Ha
Forest area = 4,268,800 Ha
Forest % of total land area = 29
Shrub area = 1,559,200 Ha
Shrub % of total land area = 10.6
Forest and shrub total = 39.6%
Total volume = 387.5 million cubic meter
Mean stem volume = 178 cubic meter/Ha
Average number of stems/Ha = 408
Main tree species in terms of proportion of total stem volume = Sal (28.2% of total volume)
From 1978/79 to 1994, rate of decrease in forest area = 1.7%
(Source: Forest Resources of Nepal (1987-98), Department of Forest Research and Survey, Nepal, 1999)
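As a quick arithmetic check of the percentages quoted above, the following Python sketch (illustrative only) recomputes the forest and shrub shares from the areas given in the list.

```python
total_land_ha = 14_718_100
forest_ha = 4_268_800
shrub_ha = 1_559_200

forest_pct = 100 * forest_ha / total_land_ha
shrub_pct = 100 * shrub_ha / total_land_ha

print(round(forest_pct, 1))               # about 29.0 % of total land area
print(round(shrub_pct, 1))                # about 10.6 %
print(round(forest_pct + shrub_pct, 1))   # about 39.6 % forest and shrub together
```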
Silviculture has been defined variously by various authors. According to Toumey and Korstian, 'silviculture is that branch of forestry which deals with the establishment, development, care, and reproduction of stands of timber'. Indian Forest and Forest Products Terminology (IFFPT), published by the Forest Research Institute and Colleges, Dehradun, defines silviculture as 'the art and science of cultivating forest crops'. According to Champion and Seth, 'the term silviculture in English commonly refers only to certain aspects of the theory and practice of raising forest crops'.
Silviculture is the science and art of growing and tending forest crops. More particularly, the term silviculture means the theory and practice of controlling the establishment, composition, character and growth of forest stands to satisfy specific objectives (Broun; Kostler; Ford-Robertson; Smith, Daniel et al). According to Webster’s Dictionary, 'silviculture is a branch of forestry dealing with development and care of forest’.
According to the Society of American Foresters, 'silviculture is the art of producing and tending forest stand by applying scientifically acquired knowledge to control forest stand establishment, composition and growth; applying different treatments to make forests more productive and more useful to a land owner and integrating biologic and economic concepts to devise and carry out treatments most appropriate in satisfying the objectives of an owner'.
Though from the above definitions, there appears to be some diversity in views about the scope of silviculture, yet, in a broad sense, silviculture may be taken to include both silvics and its practical application.
Silvics deals with the biological characteristics of individual trees and communities of them. It studies how trees grow and reproduce, as well as the ways in which the physical environment influences their physiology (Ford-Robertson; Smith, Daniel et al.). According to Indian Forest and Forest Products Terminology (IFFPT), 'silvics is the study of the life history and general characteristics of forest trees and crops with particular reference to environmental factors, as the basis for the practice of silviculture'. Thus silvics implies the study of trees and forests as biological units, the laws of their growth and development and the effect of environment on them; it explains these natural laws and the behavior of trees in a given set of environmental conditions. Silvics also includes the study of the ways in which the physical environment shapes the makeup and character of a forest community, and of the interaction of the biological components of those communities.
The knowledge gathered in silvics is applied to the production and care of forest crops. Thus the practice of silviculture is applied silvics. It deals with the procedure of obtaining natural regeneration under the various silvicultural systems, artificial regeneration of various species and methods of tending young crops, whether natural or artificial, to help them to grow into forests of quality timbers and great economic value.
Silviculture is not a purely biological science with no relation to economics. Foresters raise forests and tend them for the service of the people, but this is not to be done at a prohibitive cost. If forests are to be grown for the public good, the methods of raising and tending them, developed on the basis of the knowledge of silvics, will have to be modified in practice by economic considerations.
There have been foresters who have advocated that, in case of doubt, the trees should be approached for an answer. Even today, the local flora is regarded as the best guide to the suitability of a species for a particular site. This is so because in nature so many complex factors are at play that only the vegetation can give an indication of the possible solution. But in order to understand the indication of the vegetation, or the answer of the trees, the forester must be conversant with their language, and proficiency in this art comes from close, continuous observation and experience.
OBJECTS OF STUDY OF SILVICULTURE
The forests are as old as the universe; naturally they must have been growing and renewing themselves. It is a well-known fact that forest preceded civilization in every part of the world. Management of the forests by the Forest Department is a very recent phenomenon. Even today, there are virgin forests. The question naturally arises as to what use is the study and practice of silviculture, and why should a forester take upon himself the work that nature had been doing all these years? The answer to this question is purely economic. The object of the study and practice of silviculture is to produce forests that are more useful and valuable in meeting our multifarious requirements than nature would produce, and that too in a shorter time. The objects with which nature produces vegetation are not identical with those of man: the former produces a jungle, the latter a forest. The study of silviculture helps in:
(1) Production of species of economic value - In the virgin forests, many of the species are generally neither very valuable nor useful. Therefore, the production of timber of species of economic value per unit area is low. If the forests have to produce timber of industrial and economic importance, it is necessary to study and practice silviculture so that we can produce only the desired species.
(2) Production of larger volume per unit area - In the virgin forests, the crop is generally either very dense or very open. Both these extremes are unsuitable for quantitative production. If the crop is very dense, the growth of the individual trees is adversely affected, resulting in lower timber volume production per unit area. On the other hand, if the crop is very open, the number of trees, and consequently the volume, per unit area would be less. Besides this, a large number of trees die out as a result of competition before reaching maturity; in the unmanaged forest, they are not utilized and that volume of timber is lost. The study and practice of silviculture helps in raising sufficient trees per unit area right from the beginning to fully utilize the soil and, as they grow up, in gradually reducing their number so that the requirement of light and food of the remaining trees is met. In this way, raising a sufficient number of trees increases the volume production per unit area, while the utilization of the excess trees as the crop grows in age prevents that loss and consequently increases the usable volume further.
(3) Production of quality timber - In the unmanaged forests, because of intense competition, a large number of trees become crooked, malformed, diseased and defective. This results in the deterioration of the quality of timber produced. If the production of quality timber is to be ensured, knowledge of silviculture will be essential so that the trees can be grown in disease-free condition without adverse competition.
(4) Reduction of rotation - In the virgin forests because of intense competition in the dense parts, the rate of growth of the individual tree is retarded with the result that it takes longer time to reach the size at which it can be exploited. This increases the cost of production of timber. With the knowledge and practical application of silviculture, the density of the crop can be properly regulated and consequently the rate of growth increased and rotation reduced.
Rotation is the planned number of years between the formation or regeneration of a crop and its final felling. In other words, it is the average age at which a tree is considered mature for felling.
(5) Raising forests in blank areas - In nature, a large number of areas, potentially suitable for tree growth, occasionally remain blank due to certain adverse factors inhibiting growth of trees. Silvicultural skills and techniques help in raising forests in such areas.
(6) Creation of man-made forests in place of natural forests - There may be areas in natural forests which may not regenerate or reproduce themselves naturally or where natural regeneration may be extremely slow and uncertain. In such areas, it becomes necessary for the forester to take up the work of nature in his/her hand and raise man-made forests in such areas. Success in this endeavor can be achieved only when he/she has a good knowledge of the science and art of raising forest crops artificially.
(7) Introduction of exotics - The indigenous species may not be able to meet the commercial and/or industrial demands. In such areas, efforts are made to introduce exotics, which can grow in that particular locality and can supply the timber required by industries etc in time.
FORESTRY, ITS SCOPE AND RELATIONSHIP
Forestry is defined as ‘the theory and practice of all that constitutes the creation, conservation and scientific management of forests and the utilization of their resources.’ It is an applied science, which is concerned with not only the raising or cultivation of forest crops but their protection, perpetuation, mensuration, management, valuation and finance as well as utilization of the forest products.
Forestry encompasses the science, business, art and practice of purposefully organizing and managing forest resources to provide continuing benefits for people. Historically, forestry throughout the world has focused primarily upon organizing and managing lands to grow and utilize wood products and other commodities that forest ecosystems can provide in perpetuity.
RELATION OF SILVICULTURE WITH FORESTRY AND ITS BRANCHES
1. SILVICULTURE AND FOREST PROTECTION
Forest protection is defined as that branch of forestry which is concerned with 'the activities directed towards the prevention and control of damage to forests by man, animals, fire, insects, disease or other injurious and destructive agencies'. Knowledge of the injuries caused to forests by the local human and animal population, both domestic and wild, insects, fungi and other adverse climatic factors, and of the preventive and remedial measures to counteract them, is essential for effective protection of the forests. Thus while silviculture is concerned with the raising of the forest crop, forest protection is concerned with its protection against various sources of damage.
2. SILVICULTURE AND FOREST MENSURATION
Forest mensuration is defined as 'that branch of forestry which deals with the determination of dimensions, form, volume, age and increment of logs, single trees, stands or whole woods.' Thus while silviculture deals with raising the forest crop, forest mensuration deals with measurement of the diameter and height of the crop so produced, and calculation of its volume, age, etc., both for sale and for research to decide the best treatment to be given to the crop while it is being raised.
3. SILVICULTURE AND FOREST UTILIZATION
Forest utilization is defined as ‘the branch of forestry concerned with the harvesting, conversion, disposal and use of the forest produce’. Thus, while silviculture is concerned with the cultivation of forest crops, forest utilization is concerned with the harvesting and disposal of crops so produced.
4. SILVICULTURE AND FOREST ECONOMICS
Forest economics is defined as 'those aspects of forestry that deal with the forest as a productive asset, subject to economic laws'. Thus while silviculture is concerned with the cultivation of the forest crop, forest economics works out the cost of production, including rental of land and compound interest on capital spent in raising the crop, and compares it with the sale proceeds to decide whether raising of the crop is economically profitable or not. It is also the function of the forest economist to compare the cost of production of a particular crop by different methods and then decide the most profitable method of raising that crop.
5. SILVICULTURE AND FOREST MANAGEMENT
Forest management has been defined as ‘the practical application of the scientific, technical and economic principles of forestry.’ Thus while silviculture deals with the cultivation of forest crop, forest management manages that crop according to the dictates of the forest policy. Silviculture deals with the techniques and operations, which result in the development of a forest. Forest Management prescribes the time and place where the silvicultural techniques and operations should be carried out so that the objects of management are achieved.
6. SILVICULTURE AND FORESTRY
From the definition of forestry given earlier, it is clear that forestry has a very wide scope and silviculture is only one of its branches. It has the same relation with forestry as agronomy has with agriculture. While agronomy and silviculture deal with the cultivation of crops, agriculture and forestry deal not only with the cultivation of crops but also with their protection, management, mensuration, marketing etc. In short, forestry is an applied science, which has many branches. It may be compared to a wheel. Silviculture is the hub of the wheel; it is neither the whole wheel nor is it the only essential part. But just as a cart wheel composed of several sections is supported on its hub (central part), so forestry and its other branches are supported on silviculture, without which there would be neither forestry nor its branches.
Figure - Relation of Silviculture with Forestry and its branches.
Silviculture is ‘the art and science of cultivating forest crops’. It deals, in a general way, with the natural laws of growth and development of trees and forests, the effect of environment on them, techniques of regenerating them naturally or artificially and the methods of tending them. Since the techniques of regenerating forest crops vary with types and sub-types of forests, and physical conditions in which they exist, it becomes necessary to identify different methods or techniques for different sub-types in different localities. These methods or techniques are called silvicultural systems. Thus a silvicultural system may be defined as a method of silvicultural procedure worked out in accordance with accepted sets of silvicultural principles, by which crops constituting forests are tended, harvested and replaced by new crops of distinctive forms. In other words, it is a planned silvicultural treatment, which is applied to a forest crop, throughout its life, so that it assumes a distinctive form. It begins with regeneration felling and includes adoption of some suitable method of regeneration and tending of the new crop, not only in its early stages but also throughout its life.
As already stated, a silvicultural system is a silvicultural procedure adopted for renewal of a forest crop in a given set of conditions. Thus a silvicultural system is a specialized tool or technique for achieving the objects of forest management
Silvicultural System as a Plan for Management - Any review of silviculture must recognize at least four basic concepts or premises that underlie its practice.
1. Foresters can change the character of tree community by manipulating its composition and density, and often to better serve the special interests of a landowner.
2. They can affect the results by applying different kinds and intensities of treatments at varying stages of stand or age class development, and arranging them in a unique sequence.
3. The treatments must fit characteristics of the species of interest and the physical site conditions within the stand under management, and prove ecologically acceptable.
4. To adequately control stand or age class establishment, composition, and development (growth), silviculturists must plan both an appropriate intensity and an optimal time for applying each treatment to ensure the sought-after effects.
Conceptually, both the type of manipulation and time of its use influence how a stand or age class will develop afterward. For that reason, timing and sequence may prove as important as the actual kind of treatment applied at any juncture. Time in this sense means the stage of development, rather than season of year or even the chronological age of the trees.
To integrate these concerns and guarantee the appropriate timing, sequence and kind of treatments that will produce the desired outcomes, foresters need a conceptual framework for their management. The silvicultural system serves this role. It describes the long-term plan for managing an individual stand to sustain a particular set of values of interest. It reflects the silviculturist’s concept of how to control, facilitate, protect and salvage within a stand.
Conceptually, foresters develop a unique silvicultural system for each forest stand. Yet all silvicultural systems include three basic component treatments or functions.
( 1 ) Regeneration
( 2 ) Tending
( 3 ) Harvest
All systems for all stands will give due attention to all components in some form or another, and arrange them to fit the needs of a particular stand and a particular ownership. A silvicultural system describes the means for effectively regenerating, tending, and harvesting in a timely and economically viable manner.
Fig 1. Components and character of a silvicultural system (after Nyland et al. 1983); the figure labels include Regeneration Phase, Component Treatment, Harvest, Clear felling Method and Shelterwood Method.
Fig 2 and Fig 3. Alternate views of the silvicultural system emphasizing its continuous nature and the interdependence of its component treatments.
Figure 1 indicates that silvicultural systems commonly incorporate harvesting techniques to implement both the intermediate treatments and the methods for regenerating a new age class at the appropriate time. Figure 2 highlights the cyclic nature of silvicultural systems: normally, events move from regeneration of an age class, to the tending of it during intermediate ages, and finally to its regeneration again at maturity. Figure 3 suggests the interdependence of these components and the necessary linkages of the different parts to ensure a systems approach to management. Skipping one part makes the program incomplete, just as taking away one leg of a tripod lets it collapse. In fact, an appropriate silvicultural system should:
Optimize the yields - capitalize upon the full productive potential of a site to serve a landowner’s interest;
Improve the quality - provide the kind of stand and trees best suited to a landowner’s needs, and to the fullest extent possible;
Shorten the investment period - bring a crop or tree community to the desired condition or stage of usefulness without needless delay;
Contain the costs - minimize the investments to optimize those sought-after values; and
Sustain ecosystem health and productivity - limit practices to those that appear ecologically and biologically appropriate.
CLASSIFICATION of SILVICULTURAL SYSTEM
Silvicultural systems have been classified in a variety of ways but the most commonly used classification is based primarily on the mode of regeneration, and this is further classified according to the pattern of felling carried out in the crop. According to the mode of regeneration, silvicultural systems are classified into following two main categories or groups:
( I ) High Forest System and
( II ) Coppice System
( I ) HIGH FOREST SYSTEMS
High Forest Systems are those silvicultural systems in which the regeneration is normally of seedling origin, either natural or artificial (or a combination of both) and where the rotation is generally long. These are further classified on the basis of pattern of felling, which, in turn, affects the concentration, or diffusion of regeneration and the form or character of the new crop so produced.
(1) System of concentrated regeneration - These are those silvicultural systems in which regeneration fellings¹ are for the time being concentrated on part of the felling series. These are further subdivided into two main categories:
(A) Clear-felling systems
(B) Shelter wood systems
(A) Clear-felling systems - Clear-felling systems are those silvicultural systems in which the mature crop is removed in one operation to be regenerated, most frequently, artificially but sometimes naturally also. The area to be clear-felled each year in uniformly productive sites is 1/n of the total area allotted to this system, where n is the number of years in the rotation, and is usually referred to as the annual coupe. But where due to variations in site and crop, the yield per unit area is likely to vary considerably from year to year, the coupes to be felled each year are made equi-productive, i.e., on poorer sites with inferior patchy crop larger areas are clear-felled so that the annual yield is nearly the same.
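For a hypothetical illustration (the figures here are assumed, not taken from any particular working plan): a block of 1,200 ha of uniformly productive forest managed under this system on an 80-year rotation would have an annual coupe of 1200/80 = 15 ha, so that after 80 years the whole block has been felled and regenerated once.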
This system should not be applied in the areas, which are geologically unstable.
This system is most suited to light demanders.
It is the only system by which forests composed of slow growing species of little economic value can be replaced by new crops of fast growing and valuable species for industrial and other uses.
It is the simplest of all high forest systems as it does not require a high degree of skill in carrying out its marking. Therefore it is easy to practice.
As the soil remains exposed till the canopy closes, there is great danger of deterioration of the soil and the possibility of soil erosion increases. So this system should not be applied on sloping and erosion-prone areas.
(B) Shelter wood system - Shelter wood systems are those silvicultural systems in which the mature crop is removed in a series of operations, the first of which is the seeding felling and the last the final felling. This method is used where an inadequate seed supply or a sharp change of environmental conditions might prevent success following clear cutting. The shelter wood method also allows foresters to temper visual characteristics within regenerating stands, and to maintain essential habitat conditions for selected animals.
This system differs from clear felling in three principal respects:
(1) A relatively low-density residual of vigorous seed-bearing trees of good phenotypic character is retained as a seed source and for protective cover.
(2) The residual overstorey trees provide sufficient canopy cover to mitigate sensitive environmental conditions.
(3) The reserve trees are removed once a new generation of adequate size and density forms and no longer needs protection.
(2) Systems of diffused regeneration - These are those silvicultural systems in which regeneration fellings are distributed over the whole felling series. These are further subdivided into two main categories:
( A ) Selection System
( B ) The Group Selection System
(A) The Selection System
The Selection System is defined as a silvicultural system in which felling and regeneration are distributed over the whole of the area and the resultant crop is so uneven aged that trees of all ages are found mixed together over every part of the area. Thus, the Selection System differs from other systems mainly in the following respects:
(i) The felling and regeneration in the systems so far described are concentrated, i.e., these are confined to a certain part of the whole area, whereas in selection system, these are distributed over the whole area.
(ii) The resultant crop in all the systems so far described, is even aged and the constituent age-classes are found in different areas, whereas, in the Selection System it is completely uneven aged so much so that all age classes are mixed together on every unit of area.
(iii) In the systems so far described, the regeneration operations are carried out only during a part of the life of the crop, after which only thinnings are done to improve the growth and form of the remaining trees, whereas in the Selection System, regeneration operations are carried out throughout the life of the crop and thinnings are done simultaneously for improving the growth and form of the trees.
This system is suitable where the slope is steep and the terrain broken, to serve soil conservation and landslide protection.
It is favorable where continuous ground cover is necessary, such as in catchment areas and erosion-prone areas.
In the areas where the product of particular size and species is in demand.
Sensitive shade bearer species are more suitable to work under this system.
It is suitable where objective of management is promotion of bio-diversity as regeneration of diverse species is possible under this system.
( B ) The Group Selection System
The Group Selection System is defined as a Selection System in which trees are felled in small groups and not as the scattered single trees of the typical Selection System. These fellings may be distributed over the whole area if it is small; otherwise they are carried out only on a part of the whole forest each year under a felling cycle. Consequently, the regeneration is also spread over the whole area.
(3) The Accessory System-
The term Accessory Systems refers to those high forest systems, which originate from other even-aged systems by modification of technique, resulting in an irregular or two-storeyed high forest. The following accessory systems are commonly met with:
(A) Two-storeyed high forest system
(B) High forest with reserve system
( A ) Two-storeyed High Forest System
Two-storeyed High Forest System is an accessory silvicultural system which results in the formation of a two-storeyed forest, i.e., a crop of trees in which the canopy can be differentiated into two strata, in each of which the dominant species is usually different. The crop in each storey is approximately even-aged, and is of seedling origin.
( B ) High Forest- With Reserves System
High Forest-With-Reserves System is an accessory silvicultural system in which selected trees of the crop being regenerated are retained for part or whole of the second rotation, in order to produce large- sized timber.
( II ) COPPICE SYSTEM
Coppice systems are those silvicultural systems in which the new crop originates mainly from stool coppice and where the rotation of the coppice is short. The following are the different coppice systems.
( 1 ) The Simple Coppice system
It is defined as a silvicultural system based on stool coppice, in which the old crop is clear felled completely with no reservation for shelter wood, or any other purpose. The crop produced under this system is even-aged.
The best season for coppicing is a little before the growth starts in spring because, at this time, there is a large reserve of food material in the roots, which is utilized by the coppice shoots. The stump should be neither too low nor too high; stumps are usually kept 15 to 25 cm high. The trees are felled in such a way that the stump does not split, the bark does not get detached from the wood, and the cut slopes slightly in one direction so that rain water may quickly drain off.
o This system is suitable for areas where the factors of locality are low and incapable of producing larger-sized timber.
o This system is applicable only to areas where there may be demand for fuel, poles and small sized timber only.
o It is very simple in application.
( 2 ) The coppice of Two Rotations System :
It is the modification of the simple coppice system in which at the end of the first rotation of coppice, a few, selected poles are left scattered singly over the coupe in the second rotation to attain bigger height. The main objective of the system is to produce some large-sized timber in addition to the poles of ordinary size.
( 3 ) The Shelter wood Coppice System :
In this system, even in the first clear felling, some shelter wood (125 to 150 per hectare) is retained for frost protection. The shelter wood is removed after the coppice shoots are fully established. This system is applicable in following special circumstances:
o Where frost is of common occurrence;
o Where the locality is good;
o Where in addition to small-sized timber, there is demand for some large-sized timber also; and
o Where a rotation longer than ordinary coppice rotation can be adopted.
( 4 ) The Coppice with Standard System :
In this system, an over wood of standards, usually of seedling origin and composed of trees of various ages, is kept over coppice for periods which may be multiples of coppice rotation and as a permanent feature of the crop throughout its life. The standards are kept in this system with the following objectives:
o Supply of large-sized timber;
o Protection against frost;
o Enrichment of coppice; and
o Increase in revenue
( 5 ) Coppice Selection System :
It is a silvicultural system in which felling is carried out on the principles of the selection system but regeneration is obtained by coppice. In order to carry out fellings on the principles of selection, an exploitable girth or diameter is fixed according to the size of material required and a felling cycle¹ is decided. The character of the crop produced under this system is uneven-aged. This system has been applied in the Khair (Acacia catechu) forests.
( 6 ) The Pollard System : It is the simple coppice system but removal of exploitable material is done by periodical pollarding².
( 7 ) The Coppice With Reserve System :
Silvicultural system in which felling is done only in suitable areas likely to benefit, after reserving all financially immature growth of principal as well as other valuable miscellaneous species for protective reasons. This system is applicable with advantage only under the following conditions:
o When the crop varies greatly in density, composition and quality and proportion of the valuable species is low;
o When most of the species are good coppicers and coppicing power of the most valuable species is low; and
o When valuable species in the crop are light demanders.
Coppice of Two Rotations System compared with Shelter wood Coppice System:
Coppice of Two Rotations System - (1) Complete clear felling is done at the beginning of the 1st rotation. (2) Some poles are retained at the beginning of the 2nd rotation to remain throughout the 2nd rotation. (3) The object of retention is production of large-sized timber. (4) It is applied in ordinary areas where the simple coppice system is applied.
Shelter wood Coppice System - (1) Some shelter wood trees are retained as standards for frost protection. (2) The standards (shelter wood) are retained for frost protection only, till no longer required. (3) The object is frost protection. (4) It is applied in frosty localities.
Coppice of Two Rotations System compared with Coppice With Standards System:
Coppice of Two Rotations System - (1) The crop in the 1st rotation does not have any standards; these are selected at the beginning of the 2nd rotation. (2) Standards are selected out of the coppice crop and retained only for one extra rotation. (3) The object is simply to produce large-sized timber.
Coppice With Standards System - (1) Standards are kept from the very beginning; they are composed of trees preferably of seedling origin. (2) Standards are of seedling origin and are maintained for periods which are multiples of the coppice rotation, as a permanent feature throughout the life of the crop. (3) The objects are (i) production of large-sized timber, (ii) protection against frost and (iii) enrichment of coppice.
Shelter wood Coppice System compared with Coppice With Standards System:
Shelter wood Coppice System - (1) The standards are retained for only a part of each coppice rotation. (2) The standards are kept only for frost protection. (3) There is one rotation, that of the coppice.
Coppice With Standards System - (1) The standards are retained as a permanent feature throughout the life of the crop. (2) The standards are kept for a variety of reasons, as given above. (3) There are two rotations - one for the coppice and the other for the standards.
Coppice With Reserve System compared with Coppice With Standards System:
Coppice With Reserve System - (1) The resultant crop cannot be differentiated into different storeys. (2) The crop is treated as a whole. (3) The reserves are selected both singly as well as in groups; uniform spacing is not necessary. (4) The object of reserving trees is protection of the soil, maintenance of soil fertility, and supply of seed, fruit or any other economic forest produce. (5) The reserves are of several species.
Coppice With Standards System - (1) The crop is composed of two storeys. (2) There are distinct treatments and rotations for the understorey and the upper storey. (3) The standards are selected individually and spaced uniformly over the area. (4) The object of retaining standards is production of large-sized timber. (5) The standards are of one or two valuable species.
Choice of Silvicultural System
It has been observed that each system has some advantages and disadvantages. Therefore the choice of a system to be adopted for any species in any locality depends upon the relative advantages and disadvantages in these specific conditions. For a careful consideration of its choice, the question should be examined as under:
(i) Suitability of the system to the principal species - Only that system should be adopted which suits silvicultural requirements of the principal species. The most important factors that should be considered are light requirement, seeding and the ease of regeneration. If the principal species is a strong light demander, clear-felling system is suitable for it. On the other hand, shelter wood or selection system should be preferred for shade bearers.
( ii ) Topography and soil - Forests growing on rocky and precipitous slopes should be worked under Selection System as regeneration comes up only in limited pockets of soil. Similarly, slopes liable to erosion should not be worked under clear- felling system as clear- felling is liable to accelerate erosion.
( iii ) Resistance offered to external dangers - Though the resistance of forest crops to external dangers largely depends upon the species, the silvicultural system adopted may aggravate or reduce the danger. Where frost, drought or insect damage is considerable, shelter wood systems afford better protection than clear-felling systems; but in areas where the openings made in shelter wood systems are liable to induce dense weed growth, the Selection System is the best.
( iv ) Object of management - If the object of management is production of fuel, small timber or even poles, any of the coppice systems may be applied with advantage for species which coppice. If, however, the object is to produce large sized quality timber, one of the high forest systems will be the choice, depending on other factors.
( v ) Economic considerations - Concentration of work reduces cost of felling, logging and extraction. From this point of view, systems based on concentrated felling and regeneration, offer a great advantage over selection system, which results in diffusion of work. Amongst the former, the clear-felling and simple coppice systems are the most advantageous, as the regeneration period is the shortest. The longer the regeneration period and greater the number of secondary fellings the lesser will be the advantage.
( vi ) Development of communications - Development of communications affects the choice of system as it affects the extraction costs. Forests situated in inaccessible areas with practically no roads, can only be worked under selection system, as extraction of smaller timber does not pay for the high extraction cost. Clear-felling System can be applied in areas that are well connected by roads.
( vii ) Availability of skilled staff and labor - Certain systems require greater skill in marking, felling and extraction than others. Therefore such systems can be adopted if adequate skilled staff is available. For instance, selection system if correctly applied, requires great skill not only in marking but also in felling and extraction of trees without damaging the immature crop growing under the trees being felled. Similarly, in the mixed forests, application of shelter wood system requires considerable skill in regenerating different species in proper or desired proportion. On the other hand, clear felling system does not require any considerable skill.
( viii ) Aesthetic considerations - From the point of view of aesthetic considerations, silvicultural systems which maintain a continuous cover such as the selection system, are the best. Clear felling system is the least desirable system from this point of view.
Amatya S.M. & Shrestha K.R. (2002) Nepal Forestry Handbook. Forestry Research Support Programme for Asia and Pacific, Bangkok.
HMG/N (1999) Forest Resources of Nepal (1987-1998). Department of Forest Research and Survey, Publication No. 74.
Khanna L.S. (1977) Principles and Practice of Silviculture. Khanna Bandhu, Dehradun.
Nyland R.D. (1996) Silviculture: Concepts and Applications. McGraw-Hill, New York.
Ram Prakash & Khanna L.S. (1979) Theory and Practice of Silvicultural Systems. International Book Distributors, Dehradun, India.
Shrivastava M.B. (1998) Introduction to Forestry. Vikas Publishing House Pvt. Ltd.
¹ Regeneration Felling: A felling made with a view to inviting or assisting regeneration. It includes:
o Seeding Felling – Opening the canopy of a mature stand to provide conditions for securing regeneration from the seed of trees retained for that purpose. It is the first stage of regeneration felling.
o Secondary Felling- A regeneration felling carried out between the seeding felling and the final felling in order to gradually remove the shelter and admit increasing light to regenerated crop. It is also called intermediate felling.
o Final Felling – The removal of the last seed or shelter trees after regeneration has been established. The final stage in the regeneration felling.
¹ Felling Cycle – The time which elapses between successive main felling on the same area.
² Pollarding - Cutting of a stem in order to obtain a flush of shoots, usually above the height to which the browsing animals can reach. Example- Salix | http://silvicultureonline7.blogspot.com/ | 13 |
117 | Smoothing algorithms. The simplest smoothing algorithm is the rectangular or unweighted sliding-average smooth; it simply replaces each point in the signal with the average of m adjacent points, where m is a positive integer called the smooth width. For example, for a 3-point smooth (m = 3):
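(Writing Y for the original signal and S for the smoothed signal, the 3-point rectangular smooth can be written as Sj = (Yj-1 + Yj + Yj+1) / 3, for j = 2 to n-1, where n is the number of points in the signal.)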
The triangular smooth is like the rectangular smooth, above, except that it implements a weighted smoothing function. For a 5-point smooth (m = 5):
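(In the same notation, a common form of the 5-point triangular smooth, with weights 1-2-3-2-1, is Sj = (Yj-2 + 2Yj-1 + 3Yj + 2Yj+1 + Yj+2) / 9, for j = 3 to n-2; it is equivalent to passing the 3-point rectangular smooth over the data twice.)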
End effects and the lost points problem. Note in the equations above that the 3-point rectangular smooth is defined only for j = 2 to n-1. There is not enough data in the signal to define a complete 3-point smooth for the first point in the signal (j = 1) or for the last point (j = n), because there are no data points before the first point or after the last point. (Similarly, a 5-point smooth is defined only for j = 3 to n-2, and therefore a smooth can not be calculated for the first two points or for the last two points.) In general, for an m-width smooth, there will be (m-1)/2 points at the beginning of the signal and (m-1)/2 points at the end of the signal for which a complete m-width smooth can not be calculated. What to do? There are two approaches. One is to accept the loss of points and trim off those points or replace them with zeros in the smoothed signal. (That's the approach taken in most of the figures in this paper). The other approach is to use progressively smaller smooths at the ends of the signal, for example to use 2, 3, 5, 7... point smooths for signal points 1, 2, 3, and 4..., and for points n, n-1, n-2, n-3..., respectively. The latter approach may be preferable if the edges of the signal contain critical information, but it increases execution time. The fastsmooth function discussed below can utilize either of these two methods.
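As a concrete illustration of the first ("lost points") approach, here is a minimal Matlab/Octave sketch of an m-point rectangular smooth. It is only an illustration of the idea, not the fastsmooth function itself, and the function name slidingavg is made up for this example.
function s = slidingavg(y, m)
  % m-point rectangular (unweighted sliding-average) smooth of the vector y.
  % m should be an odd positive integer. Points within (m-1)/2 of either end,
  % for which a complete m-point window does not exist, are left as zeros.
  n = length(y);
  h = (m - 1) / 2;                   % number of "lost" points at each end
  s = zeros(size(y));
  for j = 1 + h : n - h
    s(j) = mean(y(j - h : j + h));   % average of m adjacent points
  end
end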
Examples of smoothing. A simple example of smoothing is shown in Figure 4. The left half of this signal is a noisy peak. The right half is the same peak after undergoing a triangular smoothing algorithm. The noise is greatly reduced while the peak itself is hardly changed. Smoothing increases the signal-to-noise ratio and allows the signal characteristics (peak position, height, width, area, etc.) to be measured more accurately, especially when computer-automated methods of locating and measuring peaks are being employed.
Figure 4. The left half of this signal is a noisy peak. The right half is the same peak after undergoing a smoothing algorithm. The noise is greatly reduced while the peak itself is hardly changed, making it easier to measure the peak position, height, and width directly by graphical or visual estimation, but it does not improve measurements made by least-squares methods (see below).
The larger the smooth width, the greater the noise reduction, but also the greater the possibility that the signal will be distorted by the smoothing operation. The optimum choice of smooth width depends upon the width and shape of the signal and the digitization interval. For peak-type signals, the critical factor is the smoothing ratio, the ratio between the smooth width m and the number of points in the half-width of the peak. In general, increasing the smoothing ratio improves the signal-to-noise ratio but causes a reduction in amplitude and an increase in the bandwidth of the peak.
The figures above show examples of the effect of three different smooth widths on noisy Gaussian-shaped peaks. In the figure on the left, the peak has a (true) height of 2.0 and there are 80 points in the half-width of the peak. The red line is the original unsmoothed peak. The three superimposed green lines are the results of smoothing this peak with a triangular smooth of width (from top to bottom) 7, 25, and 51 points. Because the peak width is 80 points, the smooth ratios of these three smooths are 7/80 = 0.09, 25/80 = 0.31, and 51/80 = 0.64, respectively. As the smooth width increases, the noise is progressively reduced but the peak height also is reduced slightly. For the largest smooth, the peak width is slightly increased. In the figure on the right, the original peak (in red) has a true height of 1.0 and a half-width of 33 points. (It is also less noisy than the example on the left.) The three superimposed green lines are the results of the same three triangular smooths of width (from top to bottom) 7, 25, and 51 points. But because the peak width in this case is only 33 points, the smooth ratios of these three smooths are larger - 0.21, 0.76, and 1.55, respectively. You can see that the peak distortion effect (reduction of peak height and increase in peak width) is greater for the narrower peak because the smooth ratios are higher. Smooth ratios of greater than 1.0 are seldom used because of excessive peak distortion. Note that even in the worst case, the peak positions are not affected (assuming that the original peaks were symmetrical and not overlapped by other peaks). If retaining the shape of the peak is more important than optimizing the signal-to-noise ratio, the Savitzky-Golay smooth has the advantage over sliding-average smooths.
It's important to point out that smoothing results such as those illustrated in the figures above may be deceptively optimistic because they employ a single sample of a noisy signal that is smoothed to different degrees. Smoothing is essentially a type of low-pass filtering that reduces the high-frequency components of a signal while retaining the low-frequency components. This causes the viewer to overestimate the quality of a smoothed noisy signal, because one tends to underestimate the contribution of low-frequency noise, which is hard to estimate visually because there are so few low-frequency cycles in the signal record. This error can be remedied by taking a large number of independent samples of the noisy signal. The same sort of error occurs when least-squares methods are used to measure parameters such as the slope, intercept, height, position, and width of noisy signals.
The figure on the right is another example signal that illustrates some of these principles. You can download the data file "udx" in TXT format or in Matlab MAT format. The signal consists of two Gaussian peaks, one located at x=50 and the second at x=150. Both peaks have a peak height of 1.0 and a peak half-width of 10, and a normally-distributed random white noise with a standard deviation of 0.1 has been added to the entire signal. The x-axis sampling interval, however, is different for the two peaks; it's 0.1 for the first peak and 1.0 for the second peak. This means that the first peak is characterized by ten times more points than the second peak. It may look like the first peak is noisier than the second, but that's just an illusion; the signal-to-noise ratio for both peaks is 10. The second peak looks less noisy only because there are fewer noise samples there and we tend to underestimate the dispersion of small samples. The result of this is that when the signal is smoothed, the second peak is much more likely to be distorted by the smooth (it becomes shorter and wider) than the first peak. The first peak can tolerate a much wider smooth width, resulting in a greater degree of noise reduction. (Similarly, if both peaks are measured with the peakfit method, the results on the first peak will be about 3 times more accurate than the second peak, because there are 10 times more data points in that peak, and the measurement precision improves roughly with the square root of the number of data points if the noise is white.)
Optimization of smoothing. Which is the best smooth ratio? It depends on the purpose of the peak measurement. If the objective of the measurement is to measure the true peak height and width, then smooth ratios below 0.2 should be used. (In the example on the left above, the original peak (red line) has a peak height greater than the true value 2.0 because of the noise, whereas the smoothed peak with a smooth ratio of 0.09 has a peak height that is much closer to the correct value). Measuring the height of noisy peaks is much better done by curve fitting the unsmoothed data rather than by taking the maximum of the smoothed data (see CurveFittingC.html#Smoothing). But if the objective of the measurement is to measure the peak position (x-axis value of the peak), much larger smooth ratios can be employed if desired, because smoothing has no effect at all on the peak position (unless the increase in peak width is so much that it causes adjacent peaks to overlap).
In quantitative analysis applications, the peak height reduction caused by smoothing is not so important, because in most cases calibration is based on the signals of standard solutions. If the same signal processing operations are applied to the samples and to the standards, the peak height reduction of the standard signals will be exactly the same as that of the sample signals and the effect will cancel out exactly. In such cases smooth ratios from 0.5 to 1.0 can be used if necessary to further improve the signal-to-noise ratio. In practical analytical chemistry, absolute peak height measurements are seldom required; calibration against standard solutions is the rule. (Remember: the objective of quantitative analysis is not to measure a signal but rather to measure the concentration of the analyte.) It is very important, however, to apply exactly the same signal processing steps to the standard signals as to the sample signals, otherwise a large systematic error may result.
For a comparison of all four smoothing types considered above,
When should you smooth a signal? There are two reasons to smooth a signal: (1) for cosmetic reasons, to prepare a nicer-looking graphic of a signal for visual inspection or publication, and (2) if the signal will be subsequently processed by an algorithm that would be adversely affected by the presence of too much high-frequency noise in the signal, for example if the heights of peaks are to be determined graphically or by using the MAX function, or if the location of maxima, minima, or inflection points in the signal is to be automatically determined by detecting zero-crossings in derivatives of the signal. Optimization of the amount and type of smoothing is very important in these cases (see Differentiation.html#Smoothing).
Care must be used in the design of algorithms that employ smoothing. For example, in a popular technique for peak finding and measurement, peaks are located by detecting downward zero-crossings in the smoothed first derivative, but the position, height, and width of each peak is determined by least-squares curve-fitting of a segment of original unsmoothed data in the vicinity of the zero-crossing. Thus, even if heavy smoothing is necessary to provide reliable discrimination against noise peaks, the peak parameters extracted by curve fitting are not distorted by the smoothing.
When should you NOT smooth a signal? One common situation where you should not smooth signals is prior to statistical procedures such as least-squares curve fitting, because: (a) smoothing will not significantly improve the accuracy of parameter measurement by least-squares measurements between separate independent signal samples; (b) all smoothing algorithms are at least slightly "lossy", entailing at least some change in signal shape and amplitude; (c) it is harder to evaluate the fit by inspecting the residuals if the data are smoothed, because smoothed noise may be mistaken for an actual signal; and (d) smoothing the signal will seriously underestimate the parameter errors predicted by propagation-of-error calculations and the bootstrap method. Smoothing can be used to locate peaks, but it should not be used to measure peaks.
Dealing with spikes. Sometimes signals are contaminated with very tall, narrow “spikes” occurring at random intervals and with random amplitudes, but with widths of only one or a few points. It not only looks ugly, but it also upsets the assumptions of least-squares computations because it is not normally-distributed random noise. This type of interference is difficult to eliminate using the above smoothing methods without distorting the signal. However, a “median” filter, which replaces each point in the signal with the median (rather than the average) of m adjacent points, can completely eliminate narrow spikes with little change in the signal, if the width of the spikes is only one or a few points and equal to or less than m. It can be applied prior to least-squares functions. See http://en.wikipedia.org/wiki/Median_filter.
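For illustration, here is a minimal Matlab/Octave sketch of the idea; this is not the medianfilter.m function mentioned below, and the name medsmooth and the end-point handling are assumptions of this sketch.
function s = medsmooth(y, w)
  % Replace each point of y with the median of w adjacent points (w odd).
  % Points near the ends, where a full window does not fit, are left unchanged.
  n = length(y);
  h = (w - 1) / 2;
  s = y;
  for j = 1 + h : n - h
    s(j) = median(y(j - h : j + h));
  end
end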
Condensing oversampled signals. Sometimes signals are recorded more densely (that is, with smaller x-axis intervals) than really necessary to capture all the features of the signal. This results in larger-than-necessary data sizes, which slows down signal processing procedures and may tax storage capacity. To correct this, oversampled signals can be reduced in size either by eliminating data points (say, dropping every other point or every third point) or by replacing groups of adjacent points by their averages. The latter approach has the advantage of using rather than discarding extraneous data points, and it acts like smoothing to provide some measure of noise reduction. (If the noise in the original signal is white, and the signal is condensed by averaging every n points, the noise is reduced in the condensed signal by the square root of n, with no change in the frequency distribution of the noise.)
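A minimal Matlab/Octave sketch of condensing by averaging is shown below; the group size n = 4 and the variable names x and y for the signal vectors are assumptions of this example, and leftover points at the end that do not fill a complete group are simply dropped here.
n = 4;                                 % condensation factor (assumed)
m = floor(length(y) / n);              % number of complete groups of n points
yc = mean(reshape(y(1:n*m), n, m))';   % each group of n y-values replaced by its average
xc = mean(reshape(x(1:n*m), n, m))';   % matching condensed x-axis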
Video Demonstration. This 18-second, 3 MByte video (Smooth3.wmv) demonstrates the effect of triangular smoothing on a single Gaussian peak with a peak height of 1.0 and peak width of 200. The initial white noise amplitude is 0.3, giving an initial signal-to-noise ratio of about 3.3. Attempts to measure the peak amplitude and peak width of the noisy signal, shown at the bottom of the video, are initially seriously inaccurate because of the noise. As the smooth width is increased, however, the signal-to-noise ratio improves and the accuracy of the measurements of peak amplitude and peak width improves. However, above a smooth width of about 40 (smooth ratio 0.2), the smoothing causes the peak to be shorter than 1.0 and wider than 200, even though the signal-to-noise ratio continues to improve as the smooth width is increased. (This demonstration was created in Matlab 6.5.)
Diederick has published a Savitzky-Golay smooth function in Matlab, which you can download from the Matlab File Exchange.
Here's a simple experiment in Matlab or Octave that creates a Gaussian peak, smooths it, compares the smoothed and unsmoothed version, then uses the peakfit.m function (version 3.4 or later) to show that smoothing reduces the peak height (from 1 to 0.786) and increases the peak width (from 1.66 to 2.12), but has little effect on the total peak area (a mere 0.2% change). In fact, there is no need to smooth the data if the peak height, position, and/or width will be measured by least-squares methods, because the results obtained on the unsmoothed data will be more accurate (see CurveFittingC.html#Smoothing).
>> x=[0:.1:10]';
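A self-contained sketch along the same lines is given below; the original script continues beyond the single line shown above, and this version uses a simple 25-point sliding-average smooth (implemented with conv) and reads the height and width off the smoothed data directly rather than calling peakfit.m, so those choices and the exact printed numbers are assumptions of the sketch.
x = [0:.1:10]';
y = exp(-(x-5).^2);                    % unit-height Gaussian peak, FWHM about 1.66
w = 25;                                % assumed smooth width
ys = conv(y, ones(w,1)/w, 'same');     % rectangular sliding-average smooth
[hmax, imax] = max(ys);                % height and index of the smoothed peak
fwhm = 0.1 * sum(ys > hmax/2);         % rough full width at half maximum (dx = 0.1)
plot(x, y, 'b', x, ys, 'g')
fprintf('smoothed peak height %.3f, width %.2f\n', hmax, fwhm)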
The Matlab/Octave user-defined function medianfilter.m, medianfilter(y,w), performs a median-based filter operation that replaces each value of y with the median of w adjacent points (which must be a positive integer).
ProcessSignal, a Matlab/Octave command-line function that performs smoothing and differentiation on the time-series data set x,y (column or row vectors). It can employ all the types of smoothing described above. Type "help ProcessSignal". Returns the processed signal as a vector that has the same shape as x, regardless of the shape of y. The syntax is Processed=ProcessSignal(x,y,DerivativeMode,w,type,ends,Sharpen,factor1,factor2,SlewRate,MedianWidth)
iSignal is an interactive function for Matlab that performs smoothing for time-series signals using all the algorithms discussed above, including the Savitzky-Golay smooth, with keystrokes that allow you to adjust the smoothing parameters continuously while observing the effect on your signal instantly. Version 2.2 also includes a median filter and a condense function. Other functions include differentiation, peak sharpening, and least-squares peak measurement. View the code here or download the ZIP file with sample data for testing.
iSignal for Matlab.
Note: you can right-click on any of the m-file links on this site and select Save Link As... to download them to your computer for use within Matlab. Unfortunately, iSignal does not currently work in Octave. | http://terpconnect.umd.edu/~toh/spectrum/Smoothing.html | 13 |
51 | Basics of Slope Calculations (SD - HD - Elevation)
Slope Measures - Units
Slope is a measure of steepness. Units can be in degrees, percent or as a ratio.
Degrees: Most of us are familiar with slopes measured in degrees. There are 360 degrees in a full circle. From the perspective of traversing we are essentially interested in measures between 0° and 90°. A measure of 0° indicates flat ground and a slope of 90˚ is essentially a vertical line (i.e. perpendicular to the horizon, or "straight up"). A slope of 45˚ is exactly half way between the previous two measures. With a 45˚ slope, a movement of 10 meters horizontal means you also moved 10 meters vertical (see diagram below).
We can see that for every step forward along the 45˚ slope, an equal increment is made both horizontal and vertical.
Slopes can also be expressed as a ratio or percent. The calculation is the familiar "rise / run".
As a ratio:
In the case above we have a rise and a run of 10 m each.
Slope = rise / run = 10m / 10m = 1.0
If we had a rise of 3 m and a run of 10 m, then
Slope = rise / run = 3m / 10m = 0.3
You should note that rise/run is the same as opposite/adjacent ... which is the same as tangent. Tangent is a ratio; it expresses vertical rise as a ratio (or "%") of horizontal distance. (This will be important later when we want to calculate change in elevation from our traverse notes). You can determine the 'tangent ratio' in two ways:
- if lengths are known, then use rise / run (= opposite / adjacent = elevation change / horizontal distance)
- if angle is known, then simply use the tan button on your calculator (it simply converts degrees to a ratio)
As a percent:
To convert a ratio to a percent we simply multiply by 100. Expressed as a percent the first measure would be 100% (1.0 * 100) and the second measure would be 30% (0.3 * 100).
Thus the angle in the diagram above can be expressed as 45˚ or 1.0 (ratio) or 100%.
Slope Measures - Instrument
To measure slope inclination in the field we use a clinometer (Suunto is the most common make). This device provides measures in both degrees and percent. The numbers on the left side are in degrees and the numbers on the right are in percent. (You can always double check this by 'looking up' in the clinometer until the reading is 45 on one side and 100 on the other - the 100 indicating the % side.)
Converting Slope Measures Between Percent and Degrees
Sometimes we need to be able to convert slope percent to degrees ...
Arctan (or inverse tangent) is the opposite of tangent. Remember tangent converts an angle to a ratio - well ... arctan converts a ratio to an angle. Forget all the mathematical proofs … to convert from slope ratio (or percent) to slope degrees you will use Arctan. Please do not be afraid (“step away from that Course Drop Form”) – it is simply a button on your calculator that will magically convert slope percent to degrees. Just so you know, 32 degrees is equivalent to 63% slope (try looking into your clinometer to check this). To practice on your calculator ... punch “0.63” into your calculator, then punch the button(s) for “arctan” – you will get ~32 (degrees). So, the equation to convert slope percent to degrees is:
Slope degrees = arctan (slope percent) – but remember you need to enter slope percent as a ratio, thus 63% is entered as 0.63
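If you happen to have Matlab or Octave handy rather than a calculator, the same conversion is a short sketch (atand and tand work directly in degrees); this is only an illustration of the arithmetic above.
slope_pct = 63;                          % slope in percent
slope_deg = atand(slope_pct / 100)       % arctan of the ratio, about 32 degrees
check_pct = tand(slope_deg) * 100        % tan converts back to a ratio, about 63%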
Convert SD to HD
When we traverse in the 'real world' we take measures of slope distance (SD) and steepness (slope %). However, in order to map features in their proper place we need to convert SD to HD (horizontal distance).
Measures of slope in degrees are useful in converting slope distance to horizontal distance. Remember cosine is “adjacent over hypotenuse”. If you consider “adjacent” = horizontal distance and “hypotenuse” = slope distance, then cosine = HD / SD. Think of cosine as the ratio of horizontal distance to slope distance. If this ratio (i.e. cosine) = 0.85, then for every 1 m of slope distance you are actually traveling only 0.85 m horizontal. A slope of 32° has a cosine of 0.85. (Try 'plugging' 32 into your calculator and then press the COS button - you should get 0.848).
Converting SD to HD: Consider that you have traveled 25 m slope distance with a slope angle of 32º. What is the horizontal distance?
HD = SD * cosine (slope degrees)
= 25 m * cosine(32)
= 25 m * 0.85
= 21.2 m horizontal
Now remember that we record slopes in percent, not degrees. So we will need to first convert slope percent to slope degrees. As per the previous section, we use arctan to accomplish this.
Example: our raw field measures are: SD = 25 m and slope = 63% ... calculate HD. Stepwise, we first convert the slope percent to degrees (using arctan), then take the cosine of that angle, and finally multiply the cosine by the slope distance.
In equation form ...
HD = SD * cosine * [arctan * (slope ratio)]
Our raw data is SD =25 m, and slope = 63% ... (last time our slope was in degrees, this time it is in the conventional percent)
HD = 25m * cos [arctan (0.63)]
= 25m * cos [ 32º ]
= 25m * 0.85
= 21.2 m
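The same SD-to-HD calculation can be written as a couple of Matlab/Octave lines (cosd and atand take and return degrees); again, this is just a sketch of the arithmetic above.
SD = 25; slope_pct = 63;                 % raw field measures
HD = SD * cosd(atand(slope_pct / 100))   % about 21.2 m horizontal distance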
Convert HD to SD
Sometimes we need to solve for HD. This is often the case when we want to establish plots at fixed intervals (e.g. 100 m grid). These intervals are typically in HD. To solve for slope distance, the above equation is simply rearranged:
SD = HD / cosine [arctan (slope ratio)]
Using the same data, assume you need to go 21.2 m horizontal distance to get to plot centre and the slope was 63% …
SD = 21.2 m / cosine [arctan (0.63)]
= 21.2 m / cos [ 32º ]
= 25 m
Determine change in elevation
Measures of slope in degrees are useful for converting slope distance to horizontal distance, but percent is easier to use to calculate change in elevation. Remember that tangent expresses slope as a ratio ... "rise / run"; in other words, rise (elevation change) expressed as a ratio (%) of horizontal distance. The equation is simply ...
Elev. Change = HD * slope ratio
In our example the raw data was SD =25 m, and slope = 63%. We calculated HD to be 21.2 m. Thus to calculate elevation change ...
= HD * slope ratio
= 21.2 m * 0.63
= 13.2 m
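Continuing the same Matlab/Octave sketch, the elevation change follows directly from the horizontal distance and the slope ratio.
HD = 21.2; slope_pct = 63;               % horizontal distance and slope percent
elev_change = HD * slope_pct / 100       % about 13.2 m of rise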
Thus, for a SD of 25 m @ 63% slope you traveled 21.2 m horizontal and had an elevation change of 13.2 m. | http://web.viu.ca/corrin/FRST121/Help/SlopeHelp.htm | 13 |
72 | The Axis powers were those states opposed to the Allies during the Second World War. The three major Axis Powers, Nazi Germany, Fascist Italy and the Empire of Japan were part of an alliance. At their zenith, the Axis Powers ruled empires that dominated large parts of Europe, Asia, Africa and the Pacific Ocean, but the Second World War ended with their total defeat. Like the Allies, membership of the Axis was fluid, and some nations entered and later left the Axis during the course of the war.
The term was first used by Benito Mussolini, in November 1936, when he spoke of a Rome-Berlin axis arising out of the treaty of friendship signed between Italy and Germany on October 25, 1936. Mussolini declared that the two countries would form an "axis" around which the other states of Europe would revolve. This treaty was forged when Italy, originally opposed to Germany, was faced with opposition to its war in Abyssinia from the League of Nations and received support from Germany. Later, in May 1939, this relationship transformed into an alliance, called by Mussolini the "Pact of Steel".
The term "Axis Powers" was formally adopted after the Tripartite Pact was signed by Germany, Italy and Japan on September 27, 1940 in Berlin, Germany. The pact was subsequently joined by Hungary (November 20, 1940), Romania (November 23, 1940), Slovakia (November 24, 1940) and Bulgaria (March 1, 1941). The Italian name Roberto briefly acquired a new meaning, from "Rome-Berlin-Tokyo", between 1940 and 1945. The alliance's most militarily powerful members were Germany and Japan. These two nations had already signed the Anti-Comintern Pact with each other in 1936, before the Tripartite Pact.
Major Axis Powers
The three major Axis powers were the original signatories to the Tripartite Pact:
Germany was the principal Axis power in Europe. Its official name was Deutsches Reich meaning German Empire, and after 1943, Grossdeutsches Reich meaning Greater German Empire, but during this period is most commonly known as Nazi Germany after its ruling National Socialist party.
At the start of the Second World War Germany included Austria, with which it was united in 1938 and the Sudetenland, which was ceded by Czechoslovakia in 1938, and Memelland which was ceded by Lithuania in 1939. The Protectorate of Bohemia-Moravia, created in 1939, was de facto part of Germany, although technically a Czech state under German protection.
Germany annexed additional territory during the course of the Second World War. On September 2, 1939, the day after the German invasion of Poland, the pro-Nazi government of the Free City of Danzig voted to reunite with Germany. On October 10, 1939, after the defeat and occupation of Poland, Hitler issued decrees annexing the Polish Corridor, West Prussia and Upper Silesia, formerly German territories lost to Poland under the terms of the Treaty of Versailles. The remainder of the country was organised into the "Government General for the Occupied Polish Territories".
On its western frontier, Germany made additional annexations after its defeat of France and occupation of Belgium, Netherlands and Luxembourg in 1940. Germany immediately annexed the predominately German Eupen-Malmedy from Belgium in 1940, placing the rest of the country under military occupation. Luxembourg, an independent grand duchy formerly associated with Germany, was formally annexed in 1942. Alsace-Lorraine, a region claimed by both Germany and France for centuries, was likewise annexed in 1942. In the Balkans, Slovenia was annexed in 1941 from the former Yugoslavia.
After the German invasion of the Soviet Union in 1941, Greater Germany was enlarged to include parts of Poland occupied by the USSR in 1939. Other territories occupied by the Germans were subject to separate civilian commissariats or to direct military rule.
It would take nearly four more years of fighting before the Allied nations wore down and defeated the German war machine.
Japan was the principal Axis power in Asia and the Pacific. Its official name was Dai Nippon Teikoku meaning Empire of Greater Japan, known commonly as Imperial Japan for its imperial ambitions toward Asia and the Pacific.
Japan was ruled by Emperor Hirohito and Prime Minister Hideki Tojo, and during the last days of the war, Prime Ministers Kuniaki Koiso and Kantaro Suzuki. Japan deployed most of its troops fighting in China proper, and was also the enemy of both the Americans fighting in the Pacific War and the British fighting in Burma. Just days before the war ended, the Soviet Union also engaged Japanese forces in Manchukuo during Operation August Storm. Japan's first involvement in World War II was a strike against the Republic of China, headed by General Chiang Kai-shek, on July 7, 1937. Even though not officially involved, many Americans rushed to help the Chinese, and American airmen helped the Chinese air force. The United States also instituted embargoes to stop supplying Japan with raw materials needed for the war in China. This led the Japanese to strike the Pearl Harbor naval base in Hawaii on December 7, 1941, to destroy the Allied presence in the Pacific and to secure raw materials in Southeast Asia. The following day Roosevelt asked the US Congress to declare war on Japan, saying that December 7 would be "a date which will live in infamy." Congress willingly complied, and the Pacific War began, lasting until the atomic bombings of Hiroshima and Nagasaki in 1945.
At its height, Japan's empire included Manchuria, Inner Mongolia, much of China, Malaya, French Indochina, the Dutch East Indies, the Philippines, Burma, parts of India, and various Pacific islands (such as Iwo Jima and Okinawa).
Fascist Italy was the other European power member of the Axis, belonging to the Axis in two incarnations, both under the leadership of Il Duce Benito Mussolini. Its first incarnation was officially known as Regno d'Italia meaning Kingdom of Italy.
The Kingdom of Italy was ruled by Mussolini in the name of King Victor Emmanuel III. Victor Emmanuel III was additionally Emperor of Abyssinia and King of Albania. Abyssinia had been occupied by Italian troops in 1936 and incorporated into the Italian colony of Italian East Africa. Albania was occupied by Italian troops in 1939 and joined in "personal union" with Italy when Victor Emmanuel III was offered the Albanian crown. Other Italian colonies included Libya and the Dodecanese Islands.
The second incarnation of Fascist Italy was officially known as the Repubblica Sociale Italiana, meaning Italian Social Republic. On July 25, 1943, after Italy had lost control of its African colonies and Anglo-American forces had invaded Sicily, King Victor Emmanuel III dismissed Mussolini, placed him under arrest and began secret negotiations with the Allies. When Italy switched sides in the war in September 1943, Mussolini was rescued by the Germans, and later announced the formation of the Italian Social Republic in Northern Italy.
Several minor powers formally adhered to the Tripartite Pact between Germany, Italy and Japan in this order:
Hungary was allied to Germany during the First World War by virtue of her being a constituent kingdom of the Austro-Hungarian Monarchy. Hungary suffered much the same fate as Germany, with the victorious powers stripping the kingdom of more than 70 percent of her pre-war sovereign territory, which was then distributed to neighbouring states, some newly created in accordance with the Treaty of Trianon. Miklós Horthy, a Hungarian nobleman and former Austro-Hungarian naval officer, became Regent in 1920, ruling the kingdom in the absence of an acknowledged king.
Hungary's foreign policy under Horthy was driven by the ambition to recover the territories lost through the imposition on her of the Trianon Treaty. Hungary drew closer to Germany and Italy largely because of the shared desire to revise the peace settlements made after the First World War.
Hungary participated in the German partition of Czechoslovakia, signed the Tripartite Pact, and was rewarded by Germany in the Vienna Awards which restored some of the territories taken from her by the Trianon Treaty.
Following political upheaval in Yugoslavia which threatened its continued membership in the Tripartite Pact, Hungary permitted German troops to transit its territory for a military invasion and occupation of that country. On April 11, 1941, five days after Germany invaded Yugoslavia and had largely destroyed the Yugoslav army, Hungary invaded Yugoslavia, occupying border territories. Hungary participated in the partition of Yugoslavia. Great Britain immediately broke off diplomatic relations with Hungary.
Hungary was not asked to participate in the German invasion of the Soviet Union, which began on June 22, 1941 with attacks from German, Finnish and Romanian forces as well as a declaration of war by Italy. Currying favour with Germany, Hungary declared war on the Soviet Union five days later on June 27, 1941. Hungary raised over 200,000 troops for the Eastern Front, and all three of its field armies participated in the war against the Soviet Union, although by far the largest and the most significant was the Hungarian Second Army.
On November 26, 1941, Hungary was one of 13 signatories to the revived Anti-Comintern Pact. The other signatories were: Germany, Japan, Italy, Spain, Manchukuo, Bulgaria, Croatia, Denmark, Finland, Romania, Slovakia, and the Nanking regime of Wang Chingwei.
On December 6, 1941, Great Britain declared war on Hungary. Several days later, Hungary declared war on Great Britain and the United States of America. The United States declared war on Hungary in 1942.
Hungarian troops advanced far into Soviet territory, but in the Soviet counteroffensive of 1943, the Hungarian Second Army was almost completely annihilated in fighting near Voronezh on the banks of the Don River.
In 1944, as Soviet troops neared Hungarian territory, German troops occupied Hungary. After the German occupation, Horthy was forced to abdicate when his son was kidnapped by the Germans; Hitler and Horthy had disagreed over the treatment of Hungarian Jews. In Horthy's place, Ferenc Szálasi, head of the fascist Arrow Cross Party, was put in control of Hungary. When Soviet troops entered Budapest he fled to Austria; in 1946 he was returned to Hungary and hanged for war crimes.
The Hungarian First Army continued to fight the Red Army even after Hungary had been completely occupied by the Soviet Union, not disbanding until May 8, 1945. Hungary was thus the last of Germany's Tripartite Pact allies in Europe to remain in the fight.
Romania entered the First World War in 1916 on the Allied side but was quickly defeated, its territory overrun by troops from Germany, Austria-Hungary, Bulgaria and the Ottoman Empire. Romania became a German vassal under the Treaty of Bucharest, but when Germany itself suffered defeat in the West, the Treaty of Bucharest was voided. Romania then saw its borders greatly enlarged in the peace treaties imposed on Germany and her allies.
The Soviet Union, Hungary and Bulgaria exploited the fall of France to revise the terms of those peace treaties, reducing Romania in size. On June 28, 1940, the Soviet Union occupied and annexed Bessarabia and Northern Bukovina. Germany forced Romania to relinquish Northern Transylvania to Hungary on August 30, 1940 in the second Vienna Award. Germany also forced Romania to cede Southern Dobruja to Bulgaria on September 5, 1940.
In an effort to please Hitler and obtain German protection, King Carol II appointed the General Ion Antonescu Prime Minister on September 6, 1940. Two days later, Antonescu forced the king to abdicate, installed his young son Michael on the throne, and declared himself Conducător (Leader) with dictatorial powers.
German troops entered the country in 1941, and used it as a base for its invasions of both Yugoslavia and the Soviet Union. Romania was also a key supplier of resources, especially oil and grain.
Romania joined Germany in invading the Soviet Union on June 22, 1941. Not only was Romania a base for the invasion, the country contributed nearly 300,000 troops - more than any other minor Axis power - to the war against the Soviet Union. German and Romanian troops quickly overran Bessarabia, which was again incorporated into Romania. Romania made additional annexations of Soviet territory as far east as Odessa, and the Romanian Third and Fourth Armies were involved even in the Battle of Stalingrad.
After the Soviets turned back the German invasion and prepared to attack Romania, Romania switched to the Allied side on August 23, 1944.
Slovakia had been closely aligned with Germany almost immediately from its declaration of independence from Czechoslovakia on March 14, 1939. Slovakia entered into a treaty of protection with Germany on March 23, 1939. Slovak troops joined the German invasion of Poland, fighting to reclaim territories lost in 1918.
Slovakia declared war on the Soviet Union in 1941 and signed the revived Anti-Comintern Pact of 1941. Slovak troops fought on Germany's Eastern Front, with Slovakia furnishing Germany with two divisions totalling 20,000 men. Slovakia declared war on Great Britain and the United States of America in 1942.
After the war, Slovak President Jozef Tiso was executed and Slovakia was rejoined with Czechoslovakia. Slovakia regained its independence in 1993.
Bulgaria, under its king Boris III, signed the Tripartite Pact on March 1, 1941. Bulgaria had been an ally of Germany in the First World War, and like Germany and Hungary, sought a revision of the peace terms, specifically the restoration of the San Stefano Treaty lands.
Like the other Balkan nations, Bulgaria drew closer to Nazi Germany during the 1930s. In 1940, under the terms of the Treaty of Craiova, Germany forced Romania to cede Southern Dobrudja to Bulgaria.
Bulgaria participated in the German invasion of Yugoslavia and Greece, and annexed Vardar Banovina from Yugoslavia and Western Thrace from Greece. However, Bulgaria did not join the German invasion of the Soviet Union and did not declare war on it. Despite the lack of official declarations of war by both sides, the Bulgarian Navy was involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping. Besides this, Bulgarian armed forces garrisoned in the Balkans battled various resistance groups.
As the war progressed, Bulgaria declared war on the United States and the United Kingdom. The 'symbolic' war against the Western Allies, however, turned into a disaster for the citizens of Sofia and other major Bulgarian cities, as they were heavily bombed by the US Army Air Forces and the RAF in 1943 and 1944.
As the Red Army approached the Bulgarian border, a coup on September 9, 1944 brought to power a new government of the pro-Allied Fatherland Front. Bulgaria switched sides and was permitted to keep Southern Dobrudja after the war.
Yugoslavia's regent, Prince Paul, adhered to the Tripartite Pact on March 25, 1941, but was removed from office two days later by a coup that ended his regency. The new Yugoslav government declared that it would still be bound by the treaty, but Hitler suspected that the British were behind the coup against Prince Paul and vowed to destroy the country.
The German invasion began on April 6, 1941, and after two weeks of resistance, the country was completely occupied. Croatian nationalists declared the independence of Croatia on April 10, 1941 as the "Independent State of Croatia" and enthusiastically joined the Axis. The government of Serbia was reorganised as the "National Government of Salvation" under General Milan Nedić on September 1, 1941. Nedić maintained that his Serb government was the lawful successor to the Kingdom of Yugoslavia and his troops wore the uniform of the Royal Yugoslav Army, but unlike the generous treatment accorded the Independent State of Croatia, the Germans treated Nedić's Serbia as a puppet state.
The remainder of Yugoslavia was divided among the other Axis powers. Germany annexed Slovenia. Italy annexed Dalmatia, and Albania annexed Montenegro. Hungary annexed border territories, and Bulgaria annexed Macedonia.
Ivan Mihailov's Internal Macedonian Revolutionary Organization (IMRO) welcomed the Bulgarian annexation of Vardar Macedonia. In early September 1944, when the Bulgarian government left the Axis, Germany offered Mihailov support to declare Macedonia's independence, but he declined.
Declared on April 10, 1941, the Independent State of Croatia (Nezavisna Država Hrvatska or NDH) was a member of the Axis powers until the end of Second World War, its forces fighting for Germany even after Croatia had been overrun by the Soviets. Ante Pavelić, a Croatian nationalist and one of the founders of the Croatian Uprising (Ustaše) Movement, was proclaimed Leader (Poglavnik) of the new state on April 24, 1941.
Pavelic led a Croatian delegation to Rome and offered the crown of Croatia to an Italian prince of the House of Savoy, who was crowned Tomislav II, King of Croatia, Prince of Bosnia and Herzegovina, Voivode of Dalmatia, Tuzla and Temun, Prince of Cisterna and of Belriguardo, Marquess of Voghera, and Count of Ponderano. The next day, Pavelic signed the Contracts of Rome with Mussolini, ceding Dalmatia to Italy and fixing the permanent borders between Croatia and Italy. He was also received by the Pope.
Pavelić formed the Croatian Home Guard (Hrvatsko domobranstvo) as the official military force of Croatia. Originally authorized at 16,000 men, it grew to a peak fighting force of 130,000. The Croatian Home Guard included a small air force and navy, although its navy was restricted in size by the Contracts of Rome. In addition to the Croatian Home Guard, Pavelić also commanded the Ustaše militia. A number of Croats also volunteered for the German Waffen SS.
The Ustaše government declared war on the Soviet Union, signed the Anti-Comintern Pact of 1941 and sent troops to Germany's Eastern Front. Ustaše militia garrisoned the Balkans and battled the Yugoslav Partisans (Tito's Partisans, among whom Croats were strongly represented), freeing up German and Italian forces to fight elsewhere.
During the time of its existence, the Ustaše government applied racial laws to Serbs, Jews and Roma, and after June 1941 deported them to the concentration camp at Jasenovac (or to camps in Poland). The total number of victims of the Ustaše regime remains disputed, in part because the figures offered by various historians have been shaped by political agendas; estimates range from 300,000 to 1,000,000. The racial laws were enforced by the Ustaše militia.
Thailand was an ally and co-belligerent of Japan.
In the immediate aftermath of the attack on Pearl Harbor, Japan invaded Thailand on the morning of December 8, 1941. Only hours after the invasion, Field Marshal Phibunsongkhram, the prime minister, ordered the cessation of resistance. On December 21, 1941, a military alliance with Japan was signed and on January 25, 1942 Thailand declared war on Britain and the United States of America. The Thai ambassador to the United States, Mom Rajawongse Seni Pramoj did not deliver his copy of the declaration of war, so although the British reciprocated by declaring war on Thailand and consequently considered it a hostile country, the United States did not.
On May 10, 1942, the Thai Phayap Army entered Burma's Shan State. At one time in the past the area had been part of the Ayutthaya Kingdom. The boundary between the Japanese and Thai operations was generally the Salween. However, that area south of the Shan States known as Karenni States, the homeland of the Karens, was specifically retained under Japanese control.
Three Thai infantry divisions and one cavalry division, spearheaded by armoured reconnaissance groups and ably supported by the air force, started their advance on May 10 and engaged the retreating Chinese 93rd Division. Kengtung, the main objective, was captured on May 27. Renewed offensives in June and November pushed the Chinese back into Yunnan.
As the war dragged on, the Thai population came to resent the Japanese presence. In June 1944, Phibun was overthrown in a coup d'état. The new civilian government under Khuang Aphaiwong attempted to aid the resistance while at the same time maintaining cordial relations with the Japanese.
The Free Thai Movement ("Seri Thai") was established during these first few months. Parallel Free Thai organisations were established in Britain and inside Thailand. Queen Ramphaiphanni was the nominal head of the Britain-based organisation, and Pridi Phanomyong, the regent, headed its largest contingent, which was operating within the country. Aided by elements of the military, secret airfields and training camps were established while OSS and Force 136 agents fluidly slipped in and out of the country.
After the war, U.S. influence prevented Thailand from being treated as an Axis country, but Britain demanded three million tons of rice as reparations and the return of areas annexed from the British colony of Malaya during the war and invasion. Thailand also had to return the portions of British Burma and French Indochina that had been taken.
Phibun and a number of his associates were put on trial on charges of having committed war crimes, mainly that of collaborating with the Axis powers. However, the charges were dropped due to intense public pressure. Public opinion was favourable to Phibun, since he was thought to have done his best to protect Thai interests.
Finland was a co-belligerent of Germany in its war against the Soviet Union. An avowed enemy of Bolshevism having recently fought the Winter War against the Soviets, Finland allowed Germany to use Finnish territory as a base for Operation Barbarossa.
After its loss of the Winter War to the Soviet Union in March 1940, Finland first sought protection from Great Britain and neutral Sweden, but was thwarted by Soviet and German actions. This resulted in Finland drawing closer to Germany, first with an intent of enlisting German support as a counterweight to thwart continuing Soviet pressure, but later to help regain its lost territories.
Finland's role in Operation Barbarossa was laid out in German Chancellor Adolf Hitler's Directive 21: "The mass of the Finnish army will have the task, in accordance with the advance made by the northern wing of the German armies, of tying up maximum Russian strength by attacking to the west, or on both sides, of Lake Ladoga. The Finns will also capture Hanko." The directive was issued on December 18, 1940, over two months before the Finnish High Command or civilian leadership received the first tentative hints of the upcoming invasion.
In May 1941, at the suggestion of Germany, Finland allowed Germany to recruit Finnish volunteers for SS-Freiwilligen-Bataillon Nordost. This battalion, with an initial strength of 1200 men, was attached to the multinational Wiking Division of Germany's Waffen SS. Later, an additional 200 Finns joined the battalion to cover the losses.
In the weeks leading up to Operation Barbarossa, cooperation between Finland and Germany increased, with the exchange of liaison officers and the beginning of preparations for joint military action. On June 7, Germany moved two divisions into the Finnish Lapland. On June 17, 1941, Finland ordered its armed forces to be fully mobilized and sent to the Soviet border. Finland evacuated civilians from border areas which were fortified against Soviet attack. In the opening days of the Operation, Finland permitted German planes returning from bombing runs over Leningrad to refuel at Finnish airfields before returning to bases in German East Prussia. Finland also permitted Germany to use its naval facilities in the Gulf of Finland.
In his proclamation of war against the Soviet Union issued June 22, 1941, Hitler declared that Germany was joined by Finland and Romania. However, Finland did not declare war until June 25, after the Soviet Union bombed Finnish airfields and towns, including the medieval Turku castle, which was badly damaged. The Soviets cited Finland's cooperation with Germany as provocation for the air raids. Finland countered that it was once again a victim of Soviet aggression.
Finns refer to the conflict with the Soviet Union as the Continuation War, viewing it as continuation of the Winter War that the Soviets had waged against the Finns. The Finns maintain that their sole objective was to regain the territory lost to the Soviet Union in the Winter War, but on July 10, 1941, Field Marshal Carl Gustaf Emil Mannerheim issued an Order of the Day declaring that the war aim of the Finns was "to expel the Bolsheviks out of Russian Karelia, to liberate the Karelian nations and to accord to Finland a great future."
Mannerheim's order echoed his Order of the Day issued February 23, 1918, during the Finnish War of Independence, known as the Sword Scabbard Declaration, in which Mannerheim declared he "would not put his sword into the scabbard until East Karelia was free of Lenin's warriors and hooligans." Conquest of Karelia was a historic dream of Finnish nationalists advocating Greater Finland.
Finland mobilized over 475,000 men for Germany's Eastern Front against the Soviet Union. About 1,700 volunteers from Sweden and 2,600 from Estonia served in the Finnish army. Many of the Swedish volunteers had also fought for Finland in the Winter War.
Diplomatic relations between Great Britain and Finland were severed on August 1, 1941, after the British bombed German forces in the Finnish city of Petsamo. Great Britain repeatedly called on Finland to cease its offensive against the Soviet Union, and on December 6, 1941, declared war on Finland. War was never declared between Finland and the United States.
Finland signed the revived Anti-Comintern Pact of 1941. Unlike other Axis powers, Finland maintained command of its armed forces and pursued its war objectives independently of Germany. Finland refused German requests to participate in the Siege of Leningrad, stating that capturing Leningrad was not among its goals. Leningrad, now St. Petersburg, lies outside the territory of Karelia claimed for Finland by Mannerheim. Finland also granted asylum to Jews, and Jewish soldiers continued to serve in her army.
The relationship between Finland and Germany more closely resembled an alliance during the six weeks of the Ryti-Ribbentrop Agreement, which was presented as a German condition for help with munitions and air support, as the Soviet offensive coordinated with D-Day threatened Finland with complete occupation. The agreement, signed by President Risto Ryti, but never ratified by the Finnish Parliament, bound Finland not to seek a separate peace.
Ryti's successor, President Mannerheim, ignored the agreement and opened secret negotiations with the Soviets. On September 19, 1944, Mannerheim signed an armistice with the Soviet Union and Great Britain. Under the terms of the armistice, Finland was obligated to expel German troops from Finnish territory. Finns refer to the skirmishes that followed as the Lapland War. In 1947, Finland signed a peace treaty with the Soviet Union, Great Britain and several British Commonwealth nations acknowledging its "alliance with Hitlerite Germany".
Seizing power on April 3, 1941, the nationalist government of Iraqi Prime Minister Rashid Ali repudiated the Anglo-Iraqi Treaty of 1930 and demanded that Britain close its military bases within the country. Ali sought support from Germany, Italy and Vichy France in expelling British forces from Iraq.
Hostilities between the Iraqi and British forces opened on April 18, 1941 with heavy fighting at the British air base at Lake Habbaniya. Iraq's Axis allies dispatched two air squadrons, one from the German Luftwaffe and the other from the Royal Italian Air Force. The Germans and Italians utilized Vichy French bases in Syria, precipitating fighting between British and French forces in Syria.
In early May 1941, Mohammad Amin al-Husayni, the Mufti of Jerusalem and an ally of Ali, declared "holy war" against the United Kingdom and called on Arabs throughout the Middle East to rise up against Britain. On May 25, 1941, Hitler issued his Order 30, stepping up German offensive operations: "The Arab Freedom Movement in the Middle East is our natural ally against England. In this connection special importance is attached to the liberation of Iraq... I have therefore decided to move forward in the Middle East by supporting Iraq."
Hitler dispatched German air and armored forces to Libya and formed the Deutsches Afrikakorps to coordinate a combined German-Italian offensive against the British in Egypt, Palestine and Iraq.
Iraqi military resistance ended by May 31, 1941. Rashid Ali and his ally, the Mufti of Jerusalem, fled to Persia, then to Turkey, Italy and finally Germany, where Ali was welcomed by Hitler as head of the Iraqi government-in-exile.
In propaganda broadcasts from Berlin, the Mufti continued to call on Arabs to rise up against the United Kingdom and aid German and Italian forces. He also recruited Moslem volunteers in the Balkans for the Waffen SS.
Japanese puppet states
Japan created a number of puppet states in the areas occupied by its military, beginning with the creation of Manchukuo in 1932. These puppet states achieved varying degrees of international recognition.
Manchukuo was a Japanese puppet state in Manchuria, the northeast region of China. It was nominally ruled by Puyi, the last emperor of the Qing Dynasty, but in fact controlled by the Japanese military, in particular the Kwantung Army. While Manchukuo was ostensibly a state for ethnic Manchus, the region had a Han Chinese majority.
Following the Japanese invasion of Manchuria in 1931, the independence of Manchukuo was proclaimed on February 18, 1932 with Puyi as "Head of State." He was proclaimed Emperor of Manchukuo a year later. Twenty-three of the League of Nations' eighty members recognised the new Manchu nation, but the League itself declared in 1934 that Manchuria lawfully remained a part of China, precipitating Japanese withdrawal from the League. Germany, Italy, and the Soviet Union were among the major powers recognising Manchukuo. The country was also recognised by Costa Rica, El Salvador, and the Vatican. Manchukuo was also recognised by the other Japanese allies and puppet states, including Mengjiang, the Burmese government of Ba Maw, Thailand, the Wang Chingwei regime, and the Indian government of Subhas Chandra Bose.
The armed forces of Manchukuo numbered between 200,000 and 220,000 men, according to the Soviet intelligence estimates. The Manchukuo Army garrisoned Manchukuo under the command of the Japanese Army. The Manchukuo Navy, including river patrol and coastal defense, were under the direct command of the Japanese Third Fleet. The Manchukuo Imperial Guard, numbering 200 men, was under the direct command of the Emperor and served as his bodyguard.
Mengjiang (Inner Mongolia)
Mengjiang (alternatively spelled Mengchiang) was a Japanese puppet state in Inner Mongolia. It was nominally ruled by Prince Demchugdongrub, a Mongol nobleman descended from Genghis Khan, but was in fact controlled by the Japanese military. Mengjiang's independence was proclaimed on February 18, 1936 following the Japanese occupation of the region.
The Inner Mongolians had several grievances against the central Chinese government in Nanking, with the most important one being the policy of allowing unlimited migration of Han Chinese to this vast region of open plains and desert. Several of the young princes of Inner Mongolia began to agitate for greater freedom from the central government, and it was through these men that the Japanese saw their best chance of exploiting Pan-Mongol nationalism and eventually seizing control of Outer Mongolia from the Soviet Union.
Japan created Mengjiang to exploit tensions between ethnic Mongolians and the central government of China which in theory ruled Inner Mongolia. The Japanese hoped to use pan-Mongolism to create a Mongolian ally in Asia and eventually conquer all of Mongolia from the Soviet Union.
When the various puppet governments of China were unified under the Wang Chingwei government in March 1940, Mengjiang retained its separate identity as an autonomous federation. Although under the firm control of the Japanese Imperial Army which occupied its territory, Prince Demchugdongrub had his own army that was, in theory, independent.
Mengjiang vanished in 1945 following Japan's defeat at the end of World War II and the invasion by Soviet and Mongolian armies. As the huge Soviet forces advanced into Inner Mongolia, they met limited resistance from small detachments of Mongolian cavalry, which, like the rest of the army, were quickly brushed aside.
Republic of China (Nanjing puppet regime)
A short-lived state was founded on March 29, 1940 by Wang Jingwei, who became Head of State of this Japanese supported collaborationist government based in Nanking. The government was to be run along the same lines as the Nationalist regime.
During the Second Sino-Japanese War, Japan advanced from its bases in Manchuria to occupy much of East and Central China. Several Japanese puppet states were organised in areas occupied by the Japanese Army, including the Provisional Government of the Republic of China at Peking which was formed in 1937 and the Reformed Government of the Republic of China at Nanking which was formed in 1938. These governments were merged into the Reorganised Government of the Republic of China at Nanking in 1940.
The Nanking Government had no real power, and its main role was to act as a propaganda tool for the Japanese. The Nanking Government concluded agreements with Japan and Manchukuo, authorising Japanese occupation of China and recognising the independence of Manchukuo under Japanese protection. The Nanking Government signed the Anti-Comintern Pact of 1941 and declared war on the United States and Great Britain on January 9, 1943.
The government had a strained relationship with the Japanese from the beginning. Wang's insistence on his regime being the true Nationalist government of China and in replicating all the symbols of the Kuomintang (KMT) led to frequent conflicts with the Japanese, the most prominent being the issue of the regime's flag, which was identical to that of the Republic of China.
The worsening situation for Japan from 1943 onwards meant that the Nanking Army was given a more substantial role in the defence of occupied China than the Japanese had initially envisaged. The army was almost continuously employed against the communist New Fourth Army.
Wang Jingwei died in a Tokyo clinic on November 10, 1944, and was succeeded by his deputy Chen Gongbo. Chen had little influence; the real power behind the regime was Zhou Fohai, the mayor of Shanghai. Wang's death dispelled what little legitimacy the regime had. The state stumbled on for another year, maintaining the display and trappings of a fascist regime.
On September 9, 1945, following the defeat of Japan in World War II, the area was surrendered to General He Yingqin, a Nationalist General loyal to Chiang Kai-shek. The Nanking Army generals quickly declared their allegiance to the Generalissimo, and were subsequently ordered to resist Communist attempts to fill the vacuum left by the Japanese surrender. Chen Gongbo was tried and executed in 1946.
Burma (Ba Maw regime)
Burmese nationalist leader Ba Maw formed a Japanese puppet state in Burma on August 1, 1942 after the Japanese Army seized control of the nation from the United Kingdom. The Ba Maw regime organised the Burma Defence Army (later renamed the Burma National Army), which was commanded by Aung San.
Philippines (Second Republic)
Jose P. Laurel was the President of the Second Republic of the Philippines, a Japanese puppet state organised on the Philippine Islands in 1942. In 1943, the Philippine National Assembly declared the Philippines an independent republic and elected Laurel as President. The Second Republic ended with the Japanese surrender in 1945. Laurel was arrested and charged with treason by the US government, but was granted amnesty and remained active in politics, ultimately winning a seat in the Philippine Senate.
India (Provisional Government of Free India)
The Provisional Government of Free India was a shadow government led by Subhas Chandra Bose, an Indian nationalist who rejected Gandhi's nonviolent methods for achieving independence. It operated only in those parts of India which came under Japanese control.
A former president of the Indian National Congress, Bose was arrested by Indian authorities at the outset of the Second World War. In January 1941 he escaped from house arrest and eventually reached Germany, and then Japan, where he formed the Indian National Army, mostly from Indian prisoners of war.
Bose and A. M. Sahay, another local leader, received ideological support from Mitsuru Toyama, chief of the Dark Ocean Society, along with Japanese Army advisers. Other Indian thinkers in favour of the Axis cause were Asit Krishna Mukherji, a friend of Bose and husband of Savitri Devi Mukherji, one of the women thinkers in support of the German cause, and the Pandit Rajwade of Poona. Bose was helped by Rash Behari Bose, founder of the Indian Independence League in Japan. Bose declared India's independence on October 21, 1943. The Japanese Army assigned to the Indian National Army a number of military advisors, among them Hideo Iwakuro and Major-General Isoda.
With its provisional capital at Port Blair on the Andaman and Nicobar Islands after they fell to the Japanese, the state lasted until August 18, 1945, when it officially became defunct. During its existence it received recognition from nine governments: Germany, Japan, Italy, Croatia, Manchukuo, China (under the Nanking Government of Wang Chingwei), Thailand, Burma (under the regime of Burmese nationalist leader Ba Maw), and the Philippines under de facto (and later de jure) president José Laurel.
The Indian National Army saw plenty of action (as did their Burmese equivalent). The highlight of the force's campaign in Burma was the planting of the Indian national flag by the 'Bose Battalion' during the battle of Frontier Hill in 1944, although it was Japanese troops from the 55th Cavalry, 1/29th Infantry and 2/143rd Infantry who did most of the fighting. This battle also saw the curious incident of three Sikh companies of the Bose Battalion exchanging insults and fire with two Sikh companies of the 7/16th Punjab Regiment (British Indian Army).
The Indian National Army was encountered again during the Second Arakan Campaign, where they deserted in large numbers back to their old 'imperial oppressors' and again during the crossing of the Irrawaddy in 1945, where a couple of companies put up token resistance before leaving their Japanese comrades to fight off the assault crossing by 7th Indian Division.
Italian puppet states
Albania was an Italian puppet state, joined in personal union with Italy under the kingship of Victor Emmanuel III, whose full title was King of Italy and Albania, Emperor of Ethiopia. Albania was a constituent of the New Roman Empire envisioned by Italy's fascist dictator, Il Duce Benito Mussolini.
Albania had been in Italian orbit since the First World War when it was occupied by Italy as a "protectorate" in accordance with the London Pact. Italian troops were withdrawn after the war, but throughout the 1920s and 1930s, Albania became increasingly dependent on Italy. The Albanian government and economy were subsidised by Italian loans, the Albanian army was trained by Italian instructors, and Italian settlement was encouraged.
With the major powers of Europe distracted by Germany's occupation of Czechoslovakia, Mussolini sent an ultimatum to the Albanian King Zog on March 25, 1939, demanding that Zog permit the country to be occupied by Italy as a protectorate. On April 7, 1939, Italian troops landed in Albania. Zog, his wife and newborn son immediately fled the country. Five days after the invasion, on April 12, the Albanian parliament voted to depose Zog and join the nation to Italy "in personal union" by offering the Albanian crown to Victor Emmanuel III. The parliament elected Albania's largest landowner, Shefqet Bey Verlaci, as Prime Minister. Verlaci additionally served as head of state for five days until Victor Emmanuel III formally accepted the Albanian crown in a ceremony at the Quirinale Palace in Rome. Victor Emmanuel III appointed Francesco Jacomoni di San Savino as Lieutenant-General to represent him in Albania as viceroy.
On April 15, 1939, Albania withdrew from the League of Nations, which Italy had abandoned in 1937. On June 3, 1939, the Albanian foreign ministry was merged into the Italian foreign ministry, and the Albanian Foreign Minister, Xhemil Bej Dino, was given the rank of an Italian ambassador.
Albania followed Italy into war with Britain and France on June 10, 1940. Albania served as the base for the Italian invasion of Greece in October 1940, and Albanian troops participated in the Greek campaign. Albania was enlarged by the annexation of Montenegro from the former Yugoslavia in 1941. Victor Emmanuel III as "King of Albania" declared war on the Soviet Union in 1941 and the United States in 1942. Some Albanian volunteers served in the SS Skanderbeg Division.
Victor Emmanuel III abdicated as King of Albania in 1943 when Italy left the Axis to join the Allies as a co-belligerent against Germany. Albania itself had a strong partisan movement that fiercely resisted the Fascist and Nazi occupations; as a result, Albanian partisans largely liberated the country from German forces on their own.
Ethiopia was an Italian puppet state from its conquest in 1936, when Mussolini proclaimed King Victor Emmanuel III the Emperor of Ethiopia (Keasare Ityopia). Ethiopia was consolidated with the Italian colonies of Eritrea and Italian Somaliland to form the new state of Italian East Africa (Africa Orientale Italiana), which was ruled by an Italian viceroy in the name of the King and Emperor. At the beginning of the Second World War, Italian East Africa was garrisoned by 91,000 Italian troops as well as 200,000 native Askari. Italian General Guglielmo Ciro Nasi led these forces in the conquest of British Somaliland in 1940; however, by 1941, the Italians had lost control of East Africa.
German puppet states
Italy (Salò regime)
Mussolini had been removed from office and arrested by King Victor Emmanuel III on July 25, 1943. The King publicly reaffirmed his loyalty to Germany but authorized secret armistice negotiations with the Allies. In a spectacular raid led by the German SS commando Otto Skorzeny, Mussolini was rescued from arrest.
Once safely ensconced in German-occupied Salò, Mussolini declared that the King was deposed, that Italy was a republic and that he was the new president. He functioned as a German puppet for the remainder of the war.
Serbia (Nedić regime)
Serbian General Milan Nedić formed the National Government of Salvation in German-occupied Serbia on September 1, 1941. Nedić served as prime minister of the puppet government which recognized the former Yugoslav regent, Prince Paul, as head of state.
Nedić's armed forces, the Serbian State Guards and Serbian Volunteer Corps, wore the uniform of the Royal Yugoslav Army. Nedić's forces fought with the Germans against the Yugoslav Partisans. Unlike Hitler's Nordic collaborators who sent troops to fight the Soviet Union, Nedić's Slavic troops were confined to duty in Serbia.
Montenegro (Drljević regime)
The leader of the Montenegrin Federalists, Sekula Drljević, formed the Provisional Administrative Committee of Montenegro on July 12, 1941. The Committee originally tried to collaborate with the Italians.
Drljević's Montenegrin Federalists fought a confusing civil war alongside Axis forces against Yugoslav Partisans and Chetniks.
In October 1941, Drljević was exiled from Montenegro, and in 1944 he formed the Montenegrin State Council, located in the Independent State of Croatia. It acted as the Federalists' government in exile.
Axis collaborator states
France (Vichy regime)
Marshal Philippe Pétain became the last Prime Minister of the French Third Republic on June 16, 1940, as French resistance to the German invasion of the country was collapsing. Pétain immediately sued for peace with Germany, and six days later, on June 22, 1940, his government concluded an armistice with Hitler. Under the terms of the agreement, Germany occupied approximately two thirds of France, including Paris. Pétain was permitted to keep an army of 100,000 men to defend the unoccupied zone. This number included neither the army based in the French colonial empire nor the French fleet. In French North Africa, a strength of 127,000 men was allowed after the rallying of Gabon to the Free French.
Relations between France and the United Kingdom quickly deteriorated. Fearful that the powerful French fleet might fall into German hands, the United Kingdom launched several naval attacks, the major one against the Algerian harbour of Mers el-Kebir on July 3, 1940. Though Churchill would defend his controversial decisions to attack the French Fleet and, later, invade French Syria, the French people themselves were less accepting of these decisions. German propaganda was able to trumpet these actions as an absolute betrayal of the French people by their former allies. France broke relations with the UK after the attack and considered declaring war.
On July 10, 1940, Pétain was given emergency powers by a vote of the French National Assembly, effectively creating the Vichy regime, named for the resort town of Vichy where Pétain chose to maintain his seat of government. The new government continued to be recognised as the lawful government of France by the United States until 1942. Racial laws were introduced in France and its colonies, and many French Jews were deported to Germany.
The UK permitted French General Charles de Gaulle to headquarter his Free French movement in London in a largely unsuccessful effort to win over the French colonial empire. On September 26, 1940, de Gaulle led an attack by Allied forces on the Vichy port of Dakar in French West Africa. Forces loyal to Pétain fired on de Gaulle and repulsed the attack after two days of heavy fighting. Public opinion in France was further outraged, and Vichy France drew closer to Germany.
Allied forces attacked Syria and Lebanon in 1941, after the Vichy government in Syria allowed Germany to support an Iraqi revolt against the British. In 1942, Allied forces also attacked the Vichy French colony of Madagascar.
Vichy France did not become directly involved in the war on the Eastern Front. Almost 7,000 volunteers joined the anti-communist Légion des Volontaires Français (LVF) from 1941 to 1944, and some 7,500 formed the Division Charlemagne, a Waffen-SS unit, from 1944 to 1945. Both the LVF and the Division Charlemagne fought on the Eastern Front. Hitler never accepted that France could become a full military partner, and constantly prevented the buildup of Vichy's military strength.
Beyond the political sphere, Vichy's collaboration with Germany was essentially industrial, with French factories providing many vehicles to the German armed forces.
In November 1942, Vichy French troops briefly but fiercely resisted the landing of Allied troops in French North Africa, but were unable to prevail. Admiral François Darlan negotiated a local ceasefire with the Allies. In response to the landings, and Vichy's inability to defend itself, German troops occupied southern France and the Vichy colony of Tunisia. Although French troops initially did not resist the German invasion of Tunisia, they eventually sided with the Allies and took part in the Tunisia Campaign.
In mid-1943, former Vichy authorities in North Africa came to an agreement with the Free French and set up a temporary French government in Algiers, known as the Comité Français de Libération Nationale (CFLN), with de Gaulle eventually emerging as its leader. The CFLN raised new troops, and re-organized, re-trained and re-equipped the French military under Allied supervision.
The Vichy government continued to function in mainland France until late 1944, but it had lost most of its territorial sovereignty and military assets, with the exception of the forces stationed in Indochina.
Cases of controversial relations with the Axis
The case of Denmark
On May 31, 1939, Denmark and Germany signed a treaty of non-aggression, which did not contain any military obligations for either party. On April 9, 1940, citing intended British mining of Norwegian and Danish waters as a pretext, Germany occupied both countries. King Christian X and the Danish government, worried about German bombings if they resisted occupation, accepted "protection by the Reich" in exchange for nominal independence under German military occupation. Three successive Prime Ministers, Thorvald Stauning, Vilhelm Buhl and Erik Scavenius, maintained this samarbejdspolitik ("cooperation policy") of collaborating with Germany.
- Denmark coordinated its foreign policy with Germany, extending diplomatic recognition to Axis collaborator and puppet regimes and breaking diplomatic relations with the "governments-in-exile" formed by countries occupied by Germany. Denmark broke diplomatic relations with the Soviet Union and signed the Anti-Comintern Pact of 1941.
- In 1941, a Danish military corps, Frikorps Danmark, was created at the initiative of the SS and the Danish Nazi Party to fight alongside the Wehrmacht on Germany's Eastern Front. A statement issued by the government at the time was widely interpreted as sanctioning the corps. Frikorps Danmark was open to serving members of the Royal Danish Army and to those who had completed their service within the previous ten years. Between 4,000 and 10,000 Danes joined Frikorps Danmark, including 77 officers of the Royal Danish Army. An estimated 3,900 Danes died fighting for Germany during the Second World War.
- Denmark transferred six torpedo boats to Germany in 1941, although the bulk of its navy remained under Danish command until the declaration of martial law in 1943.
- Denmark supplied agricultural and industrial products to Germany as well as loans for armaments and fortifications. Denmark's central bank, Nationalbanken, financed Germany's construction of the Danish part of the Atlantic Wall fortifications at a cost of 5 billion kroner.
The Danish protectorate government lasted until August 29, 1943, when the cabinet resigned following a declaration of martial law by occupying German military officials. The Danish navy managed to scuttle several ships to prevent their use by Germany, although most were seized by the Germans. Danish collaboration continued on an administrative level, with the Danish bureaucracy functioning under German command.
Active resistance to the German occupation among the populace, virtually nonexistent before 1943, increased after the declaration of martial law. The intelligence operations of the Danish resistance were described as "second to none" by Field Marshal Bernard Law Montgomery after the liberation of Denmark.
The case of the Soviet Union
Relations between the Soviet Union and the major Axis powers were generally hostile before 1939. In the Spanish Civil War, the Soviet Union gave military aid to the Second Spanish Republic against the Spanish Nationalist forces, which were assisted by Germany and Italy; the Nationalist forces were nonetheless victorious. In 1938 and 1939, the USSR fought and defeated Japan in two separate border conflicts, at Lake Khasan and Khalkhin Gol. The Soviets suffered another political defeat when their ally Czechoslovakia was partitioned and partially annexed by Germany, Hungary and Poland in 1938-39, with the agreement of Britain and France.
There were talks between the Soviet Union, the United Kingdom and France for an alliance against the growing power of Germany, but these talks failed. As a result, on August 23, 1939, the Soviet Union and Germany signed the Molotov-Ribbentrop Pact, which included a secret protocol whereby the independent countries of Finland, Estonia, Latvia, Lithuania, Poland and Romania were divided into spheres of interest of the two parties.
Soon after that, the Soviet Union occupied Estonia, Latvia and Lithuania; in addition, it annexed Bessarabia and Northern Bukovina from Romania. The Soviet Union attacked Finland on November 30, 1939, starting the Winter War. The Finnish defence prevented an all-out invasion, but Finland was forced to cede strategically important border areas near Leningrad.
The Soviet Union supported Germany in the war effort against Western Europe through the German-Soviet Commercial Agreement with supplies of raw materials (phosphates, chrome and iron ore, mineral oil, grain, cotton, rubber). These and other supplies were being transported through Soviet and occupied Polish territories and allowed Germany to circumvent the British naval blockade. Germany ended the Molotov-Ribbentrop Pact by invading the Soviet Union in Operation Barbarossa on June 22, 1941. That resulted in the Soviet Union becoming one of the main members of the Allies.
Germany then revived its Anti-Comintern Pact enlisting many European and Asian countries in opposition to the Soviet Union.
The Soviet Union and Japan remained neutral towards each other for most of the war under the Soviet-Japanese Neutrality Pact. The Soviet Union ended the pact by invading Manchukuo in Operation August Storm on August 8, 1945.
The cases of Spain and Portugal
Together, Generalísimo Francisco Franco's Spanish State and Salazar's Portugal gave considerable moral, economic, and military assistance to the Axis Powers while nominally maintaining their neutrality. Franco described Spain as a "nonbelligerent" supporter of the Axis and signed the Anti-Comintern Pact of 1941 with Hitler and Mussolini. The Portuguese position was more ambivalent; although Salazar was personally sympathetic to the Axis, Portugal and the United Kingdom were bound by the world's oldest defence treaty, the Treaty of Windsor.
Franco, who shared the fascist ideology of Hitler and Mussolini, had won the Spanish Civil War with the help of Germany and Italy. Spain owed Germany over $212 million for supplies of matériel during the Spanish Civil War, and Italian combat troops had actually fought in Spain on the side of Franco's Nationalists. During the War, Salazar had been active in aiding the Nationalist factions, providing troops and equipment, and even executing Loyalists attempting to flee during the final collapse of resistance.
When Germany invaded the Soviet Union in 1941, Franco immediately offered to form a unit of military volunteers to fight against the Bolsheviks. This was accepted by Hitler and, within two weeks, there were more than enough volunteers to form a division - the Blue Division (División Azul in Spanish) under General Agustín Muñoz Grandes.
Additionally, over 100,000 Spanish civilian workers were sent to Germany to help maintain industrial production to free up able bodied German men for military service, and Portugal implemented similar but smaller scale measures.
With Spain and Portugal's co-operation, the Abwehr, the German intelligence organisation, operated in Spain and Portugal themselves, and even in their African colonies, such as Spanish Morocco and Portuguese East Africa.
Relations between Portugal and the Axis deteriorated somewhat after Japanese incursions into Portugal's Asian colonies: the domination of Macau, from late 1941 onwards, and the killing of more than 40,000 civilians in the Japanese response to an Allied guerilla campaign in Portuguese Timor, during 1942-43.
In early 1944, when it became apparent that the Allies had gained the advantage over Germany, the Spanish government declared its "strict neutrality" and the Abwehr operation in southern Spain was consequently closed down. Portugal had done the same even earlier.
During the war, Franco's Spain was an escape route for several thousands of mainly Western European Jews fleeing occupied France to evade deportation to concentration camps. Likewise, Spain was an escape route for Nazi officials fleeing capture at the end of the war.
- Gerhard L. Weinberg. A World at Arms: A Global History of World War II.(NY: Cambridge University Press, 2nd edition, 2005) provides a scholarly overview.
- I. C. B. Dear and M. R. D. Foot, eds. The Oxford Companion to World War II. (2001) is a reference book with encyclopedic coverage of all military, political and economic topics.
- Kirschbaum, Stanislav (1995). A History of Slovakia: The Struggle for Survival. St. Martin’s Press. ISBN 0-312-10403-0. Covers Slovakia's involvement in the Second World War.
Simple Guitar Physics
Construction of the Guitar
In order to achieve the specific sounds required for music, guitars have various components that enable them to produce these specialized sounds. The narrow end of the guitar is called the headstock, and is attached to the neck of the guitar. On the headstock there are machine heads, also known as tuning keys, around which the strings are wound. At the point where the headstock meets the neck of the guitar, there is a small piece of material (plastic, bone, etc.) called the nut, in which small grooves are carved in order to guide the strings up to the machine heads. The neck of the guitar runs all the way down the guitar until it meets the body of the guitar at the upper bout, and it contains the fret board of the guitar, containing the frets embedded in it at points along the length of the neck that divide it mathematically. The body of the guitar is a resonating chamber which projects the vibrations of the body through the hole cut on the top of it, called the sound hole. The strings of the guitar run from the machine heads, over the nut, down the neck, body, and the sound hole, and are anchored at a piece of hardware attached to the body of the guitar, called the bridge.
It is these components of the guitar that allow it to produce the specific sounds required to create music. In order to understand music and how guitars produce it, it is first required to understand the physics of sound. Sound is created when a wave motion is set up in the air by the vibration of material bodies. What this means is that when material bodies vibrate, they create a vibrational energy that travels in pressure waves through a medium. All forms of instruments create vibrations in order to produce sound waves that make the music, which is essentially organized sound, and guitars are a type of musical instrument called a string instrument, meaning that they create their sound through the vibrations of a string. On the guitar, the string that vibrates to produce the sound is fixed at both ends, is elastic, and therefore can vibrate . When the guitar string is either strummed or plucked, the string of the guitar begins to vibrate, and since these vibrations are waves, they begin to travel in both directions along the string and are reflected back at each fixed end. These waves will not cancel each other out as they reflect back upon themselves, but instead form a standing wave, which is a situation where crests and troughs remain at fixed positions in the medium while the wave as a whole increases and decreases together. The guitar strings act in such a way that they can satisfy the relationship between wavelength and frequency, represented by the equation v = fλ . This equation can be rearranged to f = v/λ, meaning that the frequency of a wave (f) is dependent on both the speed of the wave (v), and the length of the wave (λ). As well, the speed of the wave traveling on the guitar string depends on the tension of the string (T) and the linear mass density of the string (µ), in fact, “the root frequency for a string is proportional to the square root of the tension, inversely proportional to its length, and inversely proportional to the square root of its linear mass density” . What this means is that waves will travel faster when the tension of the string is higher, which in turn means that the frequency will be higher as the tension is increased (f = v/λ, the v is increasing).
This also means that waves will travel more slowly on a more massive string, since increasing the mass density decreases v. This relationship between the speed, tension, and mass density can be arranged into a new equation, v = √(T/µ).
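As a rough illustration of these relationships, the sketch below computes the wave speed v = √(T/µ) and the resulting fundamental frequency f = v/(2L) for a string fixed at both ends. The tension, linear mass density, and string length used here are assumed values chosen only for the example, not measurements of a real guitar string.

```python
import math

def wave_speed(tension_n, linear_density_kg_per_m):
    """Transverse wave speed on a string: v = sqrt(T / mu)."""
    return math.sqrt(tension_n / linear_density_kg_per_m)

def fundamental_frequency(tension_n, linear_density_kg_per_m, length_m):
    """Fundamental frequency of a string fixed at both ends: f = v / (2L)."""
    return wave_speed(tension_n, linear_density_kg_per_m) / (2.0 * length_m)

# Assumed values: 73 N of tension, 0.0052 kg/m, 0.648 m vibrating length.
print(round(fundamental_frequency(73.0, 0.0052, 0.648), 1), "Hz")
```

Doubling the tension raises the frequency by a factor of √2, while doubling the linear mass density lowers it by the same factor, which is exactly the trade-off described above.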
When a standing wave vibrates, reflection and interference combine in such a way that the reflected waves interfere constructively with the incident waves, because the waves change phase when they reflect from one of the fixed ends. When this happens, the medium appears to vibrate in segments, and it is not apparent that the whole wave is traveling. Since a guitar string has two fixed ends, it supports a standing wave when plucked or strummed, and the longest wavelength the string can produce is twice the length of the string. Since all the strings are the same length, all six strings on the guitar use the same range of wavelengths; however, to produce the different sound waves required for music, different amounts of air must be displaced at different frequencies, meaning the strings must be able to vibrate at different frequencies. To create different frequencies on the guitar, one of the factors in the equation f = v/λ must be changed: either the speed or the length of the wave. Since the strings are attached to the nut and bridge and, when played open, have a fixed wavelength, the only factor that can be changed to produce a different frequency is the speed of the wave, v. Since the speed of the wave is set by the tension of the string and its linear mass density (v = √(T/µ)), either the tension or the mass density must be changed to create a different frequency. However, if the frequency of the string were changed only by varying the tension, the high strings (needing a higher frequency) would have to be wound very tight, while the lower strings (needing a lower frequency) would require much less tension and would be very loose. Since it would be very difficult to play a guitar where the high strings are tight and the low strings loose, guitars are constructed so that the tension of the strings is roughly equal. The only remaining factor that can differ between open strings is the mass density, so strings are made such that the higher the frequency required from the open string, the lower its mass density, since for a given tension a lighter string vibrates at a higher frequency. Conversely, the lower the required frequency, the higher the mass density, since a heavier string vibrates at a lower frequency at the same tension. Since in standard tuning the strings are a perfect fourth apart in pitch (frequency), except between G and B, the factor by which the mass density must increase from string to string for the tension to remain constant can be calculated.
Frets and Intonation
However, music is complex, and many frequencies are required to create the sound waves that produce it. This poses a problem: although the six strings of the guitar are set up in a playing-friendly manner, at this point each individual string can only produce one frequency, since no part of the equation f = v/λ changes when an open string is played, and six frequencies are not nearly enough variation to produce complex music. Therefore, one part of the equation f = v/λ must be changed while playing in order to produce a different frequency. The speed of the wave cannot practically be changed, since its two factors (v = √(T/µ)), the tension of the string and the mass density, do not change significantly enough while playing to alter the frequency. As a result, the neck of the guitar carries little strips of metal called frets, whose function is to decrease the vibrating length of the string, which raises the frequency. When a string is pressed down near a fret, the resonant length of the string is decreased: it no longer stretches from the bridge to the nut but from the bridge to the fret where the string is held down. This decreases the wavelength (λ) by decreasing the length of the medium (the string), which consequently increases the frequency of the string. Thus, on every string, the guitar player can decrease the length of the string in up to about 24 different ways, producing up to 24 different frequencies on each string. Since a guitar has six strings, and each string can have up to 24 frets, the number of notes available is greatly increased. As multiple strings can be played together, the guitarist has many frequencies from which to choose in order to create music on the instrument.
Frets on the fingerboard serve to fix the positions of notes and scales, which gives them equal temperament. Consequently, the ratio of the spacings of two consecutive frets is the twelfth root of two, whose numeric value is about 1.059. The twelfth fret divides the string into two exact halves, and the 24th fret (if present) divides the remaining half in half again. Every twelve frets represent one octave. The position of the bridge saddles, upon which the strings rest, determines the distance to the nut (at the top of the fingerboard). This distance defines the positions of the harmonic nodes for the strings over the fretboard, and is the basis of intonation. Intonation refers to the property that the actual frequency of each string at each fret matches what those frequencies should be according to music theory. Because of the physical limitations of fretted instruments, intonation is at best approximate; thus, the guitar's intonation is said to be tempered. The twelfth, or octave, fret sits directly under the first harmonic node (the half-length of the string), and on the tempered fretboard the ratio of distances between consecutive frets is approximately 1.06, as derived above.
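The spacing rule just described can be written out directly: the vibrating length remaining at fret n is L/2^(n/12), so the distance from the nut to fret n is L − L/2^(n/12). The 648 mm scale length below is an assumed value for illustration.

```python
def fret_distance_from_nut(scale_length, n):
    """Distance from the nut to fret n on an equal-tempered fretboard.

    The vibrating length remaining at fret n is scale_length / 2**(n/12),
    so the 12th fret falls at exactly half the scale length.
    """
    return scale_length - scale_length / (2 ** (n / 12.0))

scale = 648.0  # mm, assumed scale length
for n in (1, 2, 12, 24):
    print(n, round(fret_distance_from_nut(scale, n), 1), "mm")
```

The ratio of successive remaining lengths is always 2^(1/12) ≈ 1.059, matching the figure quoted above.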
However if a guitar string had only one single frequency that it vibrated on, the guitar would sound quite boring, and there would not be much difference between the guitar and other stringed instruments. Guitars sound different from other stringed instruments because of the different overtones, or harmonics dominant on a guitar. When a guitar string is either strummed or plucked, the string begins to vibrate, and these vibrations are in the form of waves. However, the waves that are created by the vibrations of the string travel in both directions along the string, and continue forward until they are reflected off the fixed ends. When the waves are reflected, they change direction, and travel back the other way through the medium (the string). When the waves are traveling back through the string, they cause interference with the other waves traveling the string that were also caused by the vibration . The standing wave pattern is formed when there is perfectly timed interference of two waves passing through the same medium, to create a situation where the crests and troughs remain at fixed positions. On a guitar string, the waves that are reflected and are traveling in the opposite direction of the other waves on the string create a standing wave. Because of the interfering vibrations on a guitar string, standing wave patterns are created, meaning that there are some points along the string that appear to be standing still, and these points of no displacement are referred to as nodes. As well, there are other points along the medium that undergo vibrations between a large positive and large negative displacement, and are the points that undergo the maximum displacement during each vibrational cycle of the standing wave and are called antinodes. On the guitar string, a number of different patterns of standing waves may be produced, and each pattern will have different number of nodes and antinodes. Standing wave patterns can only be produced within the string of the guitar when it is vibrated at certain frequencies, however there are several frequencies with which the string can be vibrated to produce the different patterns of standing waves, each with a different number of nodes and antinodes. Every different frequency is associated with a different standing wave pattern, and they are referred to as harmonics. The most simple pattern of standing wave that can be produced is one at which the two nodes are at the fixed ends, which is the longest wavelength, and it is called the first harmonic, or fundamental harmonic. Since on a guitar string the waves keep on being reflected off the fixed ends and causing interference with each other, there are many different frequencies, but with any medium fixed at both ends, only certain sized waves can stand. This means that on a guitar string, only certain types of frequencies can stand, so we say that such a medium is tuned. Therefore, the strings on the guitar are tuned in such a way that the second pattern of the standing wave, or second harmonic, can only have half the wavelength and twice the frequency of the first harmonic. The second harmonic is also referred to as the first overtone, and it is these multiple overtones that we hear from the guitar string that make the guitar sound different from other instruments. Similarly, the third harmonic, or the third pattern possibility for the standing wave on a guitar, has one third the wavelength and three times the frequency when compared to the first harmonic and is called the second overtone. 
The rest of the harmonics follow the same pattern: the nth harmonic has 1/n the wavelength and n times the frequency of the fundamental. It is the fundamental frequency (first harmonic) that determines the note that we hear, and the higher harmonics determine the timbre. This means that the simplest standing wave pattern on the guitar string, containing only two nodes and one antinode, determines what musical note we hear, while the more complex standing wave patterns, the other harmonics, determine how that note sounds.
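Listing the overtone series makes the 1/n-wavelength, n-times-frequency pattern concrete; the 110 Hz fundamental and 0.648 m string length below are assumed values, not properties of any particular string.

```python
def harmonic_series(fundamental_hz, string_length_m, n_harmonics=5):
    """(harmonic number, frequency, wavelength) for a string fixed at both ends."""
    rows = []
    for n in range(1, n_harmonics + 1):
        wavelength = 2.0 * string_length_m / n  # fundamental wavelength is 2L
        rows.append((n, n * fundamental_hz, wavelength))
    return rows

for n, freq, lam in harmonic_series(110.0, 0.648):
    print(f"harmonic {n}: {freq:.0f} Hz, wavelength {lam:.3f} m")
```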
Sound is created when material vibrations cause changes in air pressure and create pressure waves. However, guitar strings are too small to move enough air to create a sound loud enough to be easily heard by the human ear. Therefore, the body of an acoustic guitar is used to amplify the sound the strings produce, and it is made up of different components that allow it to do so. The body is basically a large hollow space specially constructed to amplify the sound of the strings. The top plate of the body, the piece of wood on the front of the guitar, is constructed so that it can vibrate up and down relatively easily, and is usually made of light, springy wood about 2.5 mm thick. Inside the body there is a series of braces that strengthen the top plate and keep it flat, despite the pull of the strings, which tends to make the bridge move, since the bridge is attached to the top plate. On the opposite side of the guitar is the back plate, which does not play as big a role in amplifying the sound, since it is held against the player's body and cannot vibrate much. The sides of the guitar also do not vibrate much in the direction perpendicular to their surface, so they do not radiate much sound either.
When the strings are plucked or strummed, they begin to vibrate, and these vibrations are transmitted as waves to the bridge of the guitar. Since the bridge is attached to the top plate, the top plate also begins to vibrate as a result of the string's vibrations, via the bridge. If the string, and therefore the bridge, is vibrating at a high frequency, most of the sound is radiated by the vibrations of the top plate. Since the top plate has a much larger surface area than the string, when it vibrates it displaces a much larger volume of air than the string does. Therefore, the pressure waves produced by the top plate will be bigger, and the sound will be louder. For lower frequencies, the strings' vibrations are transmitted via the bridge to the top plate, then to the back plate, and then reflected out through the sound hole, which increases the volume of the pressure waves being produced. In fact, it is not the vibrations of the guitar string themselves that we hear when listening to a guitar, but rather their amplification through the body of the guitar.
Archimedes (Greek: Ἀρχιμήδης) (c. 287 B.C.E. – 212 B.C.E.) was an ancient Greek mathematician, physicist, engineer, astronomer, and philosopher, considered one of the greatest mathematicians in antiquity. Archimedes apparently studied mathematics in Alexandria, but lived most of his life in Syracuse. He discovered how to find the volume of a sphere and determined the value of Pi; developed a way of counting using zeros to represent powers of ten; discovered a formula to find the area under a curve and the amount of space enclosed by a curve; and may have been the first to use integral calculus. Archimedes also invented the field of statics, enunciated the law of the lever, the law of equilibrium of fluids, and the law of buoyancy. He was the first to identify the concept of center of gravity, and he found the centers of gravity of various geometric figures, including triangles, paraboloids, and hemispheres, assuming the uniform density of their interiors. Using only ancient Greek geometry, he also gave the equilibrium positions of floating sections of paraboloids as a function of their height, a feat that would be challenging for a modern physicist using calculus.
Archimedes only became widely known as a mathematician after Eutocius brought out editions of some of his works, with commentaries, in the sixth century C.E. Ancient writers were more interested in his inventions and in the ingenious war machines which he developed than in his achievements in mathematics. Plutarch recounts how Archimedes’ war machines defended Syracuse against Roman attackers during the Second Punic War. Many of Archimedes’ works were lost when the Library of Alexandria was burnt (twice), and survived only in Latin or Arabic translations.
Archimedes was born in the seaport colony of Syracuse, Magna Graecia (now Sicily), around 287 B.C.E. He studied in Alexandria and then returned to Syracuse, where he spent the rest of his life. Much of what is known about Archimedes comes from the prefaces to his works and from stories related by Plutarch, Livy and other ancient historiographers. The preface to The Sand Reckoner tells us that Archimedes’ father, Phidias, was an astronomer. In the preface to On Spirals, Archimedes relates that he often sent his friends in Alexandria statements of his latest theorems, but without giving proofs. Some of the mathematicians there had claimed his results as their own, so Archimedes says that on the last occasion when he sent them theorems he included two which were false, “… so that those who claim to discover everything, but produce no proofs of the same, may be confuted as having pretended to discover the impossible.” He regarded Conon of Samos, one of the mathematicians at Alexandria, as a close friend and admired him for his abilities as a mathematician.
The dedication of The Sand Reckoner to Gelon, the son of King Hieron, is evidence that Archimedes was close to the family of King Hieron II. Plutarch’s biography of a Roman soldier, Marcellus, who captured Syracuse in 212 B.C.E., also tells us that Archimedes was related to King Hieron II of Syracuse. The same biography contends that Archimedes, possessing a lofty spirit and profound soul, refused to write any treatise on engineering or mechanics but preferred to devote himself to the study of pure geometry and pursued it without regard for food or personal hygiene.
And yet Archimedes possessed such a lofty spirit, so profound a soul, and such a wealth of scientific theory, that although his inventions had won for him a name and fame for superhuman sagacity, 4 he would not consent to leave behind him any treatise on this subject, but regarding the work of an engineer and every art that ministers to the needs of life as ignoble and vulgar, he devoted his earnest efforts only to those studies the subtlety and charm of which are not affected by the claims of necessity. These studies, he thought, are not to be compared with any others; in them the subject matter vies with the demonstration, the former supplying grandeur and beauty, the latter precision and surpassing power. 5 For it is not possible to find in geometry more profound and difficult questions treated in simpler and purer terms. Some attribute this success to his natural endowments; others think it due to excessive labour that everything he did seemed to have been performed without labour and with ease. For no one could by his own efforts discover the proof, and yet as soon as he learns it from him, he thinks he might have discovered it himself; so smooth and rapid is the path by which he leads one to the desired conclusion. 6 And therefore we may not disbelieve the stories told about him, how, under the lasting charm of some familiar and domestic Siren, he forgot even his food and neglected the care of his person; and how, when he was dragged by main force, as he often was, to the place for bathing and anointing his body, he would trace geometrical figures in the ashes, and draw lines with his finger in the oil with which his body was anointed, being possessed by a great delight, and in very truth a captive of the Muses. 7 And although he made many excellent discoveries, he is said to have asked his kinsmen and friends to place over the grave where he should be buried a cylinder enclosing a sphere, with an inscription giving the proportion by which the containing solid exceeds the contained. (Plutarch, Marcellus, 17: 3-7 translated by John Dryden)
Plutarch also gives three accounts of the death of Archimedes at the hands of the Roman soldiers. Although Marcellus ordered that Archimedes not be harmed, Roman soldiers came upon him at work and brutally murdered him. These stories seem designed to contrast the high-mindedness of the Greeks with the blunt insensitivity and brutality of the Roman soldiers.
4 But what most of all afflicted Marcellus was the death of Archimedes. For it chanced that he was by himself, working out some problem with the aid of a diagram, and having fixed his thoughts and his eyes as well upon the matter of his study, he was not aware of the incursion of the Romans or of the capture of the city. Suddenly a soldier came upon him and ordered him to go with him to Marcellus. This Archimedes refused to do until he had worked out his problem and established his demonstration, 5 whereupon the soldier flew into a passion, drew his sword, and dispatched him. Others, however, say that the Roman came upon him with drawn sword threatening to kill him at once, and that Archimedes, when he saw him, earnestly besought him to wait a little while, that he might not leave the result that he was seeking incomplete and without demonstration; but the soldier paid no heed to him and made an end of him. 6 There is also a third story, that as Archimedes was carrying to Marcellus some of his mathematical instruments, such as sun-dials and spheres and quadrants, by means of which he made the magnitude of the sun appreciable to the eye,b some soldiers fell in with him, and thinking that he was carrying gold in the box, slew him. However, it is generally agreed that Marcellus was afflicted at his death, and turned away from his slayer as from a polluted person, and sought out the kindred of Archimedes and paid them honour. (Plutarch, Marcellus, Chapter 19: 4-6, translated by John Dryden)
Thought and Works
Archimedes is considered by most historians of mathematics as one of the greatest mathematicians of all time. In creativity and insight, Archimedes exceeded any other European mathematician prior to the European Renaissance. Archimedes' works were not generally recognized, even in classical antiquity, though individual works were often quoted by three eminent mathematicians of Alexandria, Heron, Pappus and Theon, and only became widely known after Eutocius brought out editions of some of them, with commentaries, in the sixth century C.E. Many of Archimedes’ works were lost when the library of Alexandria was burnt (twice), and survived only in Latin or Arabic translations. The surviving works include On Plane Equilibriums (two books), Quadrature of the Parabola, On the Sphere and Cylinder (two books), On Spirals, On Conoids and Spheroids, On Floating Bodies (two books), Measurement of a Circle, and The Sand Reckoner. In the summer of 1906, J. L. Heiberg, professor of classical philology at the University of Copenhagen, discovered a tenth century manuscript which included Archimedes' work The Method, which provides a remarkable insight into how Archimedes made many of his discoveries.
Numerous references to Archimedes in the works of ancient writers are concerned more with Archimedes’ inventions, particularly those machines which were used as engines of war, than with his discoveries in mathematics.
King Hiero II, who was rumored to be Archimedes' uncle, commissioned him to design and fabricate a new class of ships for his navy. Hiero II had promised large caches of grain to the Romans in the north in return for peace. Unable to deliver the promised amount, Hiero II commissioned Archimedes to develop a large, luxurious supply and war barge for his navy. The ship, coined Saracussia, after its nation, may be mythical. There is no record on foundry art, nor any other period pieces depicting its creation. It is solely substantiated by a description from Plato, who said "it was the grandest equation ever to sail."
It is said that the Archimedes Screw, a device which draws water up, was developed as a tool to remove bilge water from ships. Archimedes became well-known for his involvement in the defense of Syracuse, Italy against the Roman attack during the Second Punic War. In his biography of Marcellus, Plutarch describes how Archimedes held the Romans at bay with war machines of his own design, and was able to move a full-size ship complete with crew and cargo with a compound pulley by pulling a single rope.
7And yet even Archimedes, who was a kinsman and friend of King Hiero, wrote to him that with any given force it was possible to move any given weight; and emboldened, as we are told, by the strength of his demonstration, he declared that, if there were another world, and he could go to it, he could move this. 8 Hiero was astonished, and begged him to put his proposition into execution, and show him some great weight moved by a slight force. Archimedes therefore fixed upon a three-masted merchantman of the royal fleet, which had been dragged ashore by the great labours of many men, and after putting on board many passengers and the customary freight, he seated himself at a distance from her, and without any great effort, but quietly setting in motion with his hand a system of compound pulleys, drew her towards him smoothly and evenly, as though she were gliding through the water. 9 Amazed at this, then, and comprehending the power of his art, the king persuaded Archimedes to prepare for him offensive and defensive engines to be used in every kind of siege warfare. These he had never used himself, because he spent the greater part of his life in freedom from war and amid the festal rites of peace; but at the present time his apparatus stood the Syracusans in good stead, and, with the apparatus, its fabricator. Plutarch, Chapter 14, Marcellus,7-9
Claw of Archimedes
One of his inventions used for military defense of Syracuse against the invading Romans was the “claw of Archimedes.” Archimedes also has been credited with improving accuracy, range and power of the catapult, and with the possible invention of the odometer during the First Punic War.
15 When, therefore, the Romans assaulted them by sea and land, the Syracusans were stricken dumb with terror; they thought that nothing could withstand so furious an onset by such forces. But Archimedes began to ply his engines, and shot against the land forces of the assailants all sorts of missiles and immense masses of stones, which came down with incredible din and speed; nothing whatever could ward off their weight, but they knocked down in heaps those who stood in their way, and threw their ranks into confusion. 2 At the same time huge beams were suddenly projected over the ships from the walls, which sank some of them with great weights plunging down from on high; others were seized at the prow by iron claws, or beaks like the beaks of cranes, drawn straight up into the air, and then plunged stern foremost into the depths, or were turned round and round by means of enginery within the city, and dashed upon the steep cliffs that jutted out beneath the wall of the city, with great destruction of the fighting men on board, who perished in the wrecks. 3 Frequently, too, a ship would be lifted out of the water into mid-air, whirled hither and thither as it hung there, a dreadful spectacle, until its crew had been thrown out and hurled in all directions, when it would fall empty upon the walls, or slip away from the clutch that had held it. As for the engine which Marcellus was bringing up on the bridge of ships, and which was called "sambuca" from some resemblance it had to the musical instrument of that name,25 4 while it was still some distance off in its approach to the wall, a stone of ten talents' weight26 was discharged at it, then a second and a third; some of these, falling upon it with great din and surge of wave, crushed the foundation of the engine, shattered its frame-work, and dislodged it from the platform, so that Marcellus, in perplexity, ordered his ships to sail back as fast as they could, and his land forces to retire. 5Then, in a council of war, it was decided to come up under the walls while it was still night, if they could; for the ropes which Archimedes used in his engines, since they imparted great impetus to the missiles cast, would, they thought, send them flying over their heads, but would be ineffective at close quarters, where there was no place for the cast. Archimedes, however, as it seemed, had long before prepared for such an emergency engines with a range adapted to any interval and missiles of short flight, and through many small and contiguous openings in the wall short-range engines called scorpions could be brought to bear on objects close at hand without being seen by the enemy. When, therefore, the Romans came up under the walls, thinking themselves unnoticed, once more they encountered a great storm of missiles; huge stones came tumbling down upon them almost perpendicularly, and the wall shot out arrows at them from every point; they therefore retired. 2 And here again, when they were some distance off, missiles darted forth and fell upon them as they were going away, and there was great slaughter among them; many of their ships, too, were dashed together, and they could not retaliate in any way upon their foes. For Archimedes had built most of his engines close p479behind the wall, and the Romans seemed to be fighting against the gods, now that countless mischiefs were poured out upon them from an invisible source. 
17 However, Marcellus made his escape, and jesting with his own artificers and engineers, "Let us stop," said he, "fighting against this geometrical Briareus, who uses our ships like cups to ladle water from the sea, and has whipped and driven off in disgrace our sambuca, and with the many missiles which he shoots against us all at once, outdoes the hundred-handed monsters of mythology." 2 For in reality all the rest of the Syracusans were but a body for the designs of Archimedes, and his the one soul moving and managing everything; for all other weapons lay idle, and his alone were then employed by the city both in offence and defence. 3 At last the Romans became so fearful that, whenever they saw a bit of rope or a stick of timber projecting a little over the wall, "There it is," they cried, "Archimedes is training some engine upon us," and turned their backs and fled. Seeing this, Marcellus desisted from all fighting and assault, and thenceforth depended on a long siege. (Plutarch, Marcellus, Chapters 15 - 17
It is said that Archimedes prevented one Roman attack on Syracuse by using a large array of mirrors (speculated to have been highly polished shields) to reflect concentrated sunlight onto the attacking ships, causing them to catch fire. This popular legend, dubbed the "Archimedes death ray," has been tested many times since the Renaissance and often discredited. It seems the ships would have had to be virtually motionless and very close to shore for them to ignite, an unlikely scenario during a battle. A group at the Massachusetts Institute of Technology performed its own tests and concluded that the mirror weapon was a possibility, although later tests of their system showed it to be ineffective in conditions that more closely matched the described siege. The television show Mythbusters also took on the challenge of recreating the weapon and concluded that while it was possible to light a ship on fire, it would have to be stationary at a specified distance during the hottest part of a very bright, hot day, and would require several hundred troops carefully aiming mirrors while under attack. These unlikely conditions, combined with the availability of other simpler methods, such as ballistae with flaming bolts, led the team to believe that the heat ray was far too impractical to be used, and probably just a myth.
The story of Archimedes discovering buoyancy while sitting in his bathtub is described in Book 9 of De architectura by Vitruvius. King Hiero had given a goldsmith the exact amount of gold to make a sacred gold wreath. When Hiero received it, the wreath had the correct weight but the monarch suspected that some silver had been used instead of the gold. Since he could not prove it without destroying the wreath, he brought the problem to Archimedes. One day while considering the question, "the wise one" entered his bathtub and recognized that the amount of water that overflowed the tub was proportional to the amount of his body that was submerged. This observation is now known as Archimedes' Principle and gave him the means to measure the mass of the gold wreath. He was so excited that he ran naked through the streets of Syracuse shouting "Eureka! eureka!" (I have found it!). The dishonest goldsmith was brought to justice.
The Law of Buoyancy:
- The buoyant force is equal to the weight of the displaced fluid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (specifically if the surrounding fluid is of uniform density). Thus, among objects with equal masses, the one with greater volume has greater buoyancy.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum. Suppose that when the rock is lowered by the string into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs will be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons.
The density of the immersed object relative to the density of the fluid is easily calculated without measuring any volumes: it is the object's weight divided by the weight of the fluid it displaces (that is, by the loss of weight when immersed). For the rock above, the relative density is 10 ÷ (10 − 7) = 10/3, or about 3.3 times the density of water.
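A minimal sketch of the same arithmetic, using the rock example above (a 10-newton rock displacing 3 newtons of water); the function names are only illustrative.

```python
def apparent_weight(weight_n, displaced_fluid_weight_n):
    """Tension in the supporting string: true weight minus the buoyant force."""
    return weight_n - displaced_fluid_weight_n

def relative_density(weight_n, displaced_fluid_weight_n):
    """Density of the object relative to the fluid, with no volumes measured."""
    return weight_n / displaced_fluid_weight_n

print(apparent_weight(10.0, 3.0))             # 7.0 N read on the string
print(round(relative_density(10.0, 3.0), 2))  # 3.33: denser than the fluid
```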
In creativity and insight, Archimedes exceeded any other European mathematician prior to the European Renaissance. In a civilization with an awkward numeral system and a language in which "a myriad" (literally "ten thousand") meant "infinity," he invented a positional numeral system and used it to write numbers up to 10^64. He devised a heuristic method based on statistics to do private calculations that would be classified today as integral calculus, but then presented rigorous geometric proofs for his results. To what extent Archimedes' version of integral calculus was correct is debatable. He proved that the ratio of a circle's circumference to its diameter is the same as the ratio of the circle's area to the square of the radius. He did not call this ratio Pi (π) but he gave a procedure to approximate it to arbitrary accuracy and gave an approximation of it as between 3 + 10/71 (approximately 3.1408) and 3 + 1/7 (approximately 3.1429). He was the first Greek mathematician to introduce mechanical curves (those traced by a moving point) as legitimate objects of study. He proved that the area enclosed by a parabola and a straight line is 4/3 the area of a triangle with equal base and height. (See the illustration below. The "base" is any secant line, not necessarily orthogonal to the parabola's axis; "the same base" means the same "horizontal" component of the length of the base; "horizontal" means orthogonal to the axis. "Height" means the length of the segment parallel to the axis from the vertex to the base. The vertex must be so placed that the two horizontal distances mentioned in the illustration are equal.)
In the process, he calculated the earliest known example of a geometric progression summed to infinity with the ratio 1/4: 1 + 1/4 + 1/16 + 1/64 + ... = 4/3.
If the first term in this series is the area of the triangle in the illustration, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines in the illustration, and so on. Archimedes also gave a quite different proof of nearly the same proposition by a method using infinitesimals (see "Archimedes' use of infinitesimals").
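The procedure for approximating π mentioned above, squeezing the circle between inscribed and circumscribed regular polygons and repeatedly doubling the number of sides, can be sketched as follows. This is an illustrative modern reconstruction, not a transcription of Archimedes' own text: starting from hexagons around a unit circle and doubling four times (to 96-sided polygons) reproduces bounds close to his 3 + 10/71 and 3 + 1/7.

```python
import math

def archimedes_pi_bounds(doublings=4):
    """Bound pi using semi-perimeters of inscribed/circumscribed polygons.

    For a unit circle, start with regular hexagons:
    inscribed semi-perimeter b = 3, circumscribed semi-perimeter a = 2*sqrt(3).
    Each doubling of the side count uses a' = 2ab/(a + b), b' = sqrt(a' * b).
    """
    a = 2.0 * math.sqrt(3.0)  # circumscribed hexagon
    b = 3.0                   # inscribed hexagon
    for _ in range(doublings):
        a = 2.0 * a * b / (a + b)
        b = math.sqrt(a * b)
    return b, a  # lower and upper bounds on pi

low, high = archimedes_pi_bounds()
print(low, high)  # roughly 3.1410 < pi < 3.1427 for 96-sided polygons
```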
He proved that the ratio of the area of a sphere to the area of a circumscribed straight cylinder is the same as the ratio of the volume of the sphere to the volume of the circumscribed straight cylinder, an accomplishment which he had inscribed as his epitaph on his tombstone.
Archimedes is probably also the first mathematical physicist on record, and the best until Galileo and Newton. He invented the field of statics, enunciated the law of the lever, the law of equilibrium of fluids, and the law of buoyancy. He was the first to identify the concept of center of gravity, and he found the centers of gravity of various geometric figures, including triangles, paraboloids, and hemispheres, assuming the uniform density of their interiors. Using only ancient Greek geometry, he also gave the equilibrium positions of floating sections of paraboloids as a function of their height, a feat that would be challenging for a modern physicist using calculus.
Archimedes was also an astronomer. Cicero writes that the Roman consul Marcellus brought two devices back to Rome from the ransacked city of Syracuse. One device mapped the sky on a sphere and the other predicted the motions of the sun and the moon and the planets (an orrery). He credits Thales and Eudoxus for constructing these devices. For some time the truth of this legend was in doubt, but the retrieval from an ancient shipwreck in 1902 of the Antikythera mechanism, a device dated to 150-100 B.C.E., has confirmed the probability that Archimedes possessed and constructed such devices. Pappus of Alexandria writes that Archimedes had written a practical book on the construction of such spheres entitled On Sphere-Making.
Writings by Archimedes
- On the Equilibrium of Planes (2 volumes)
- This scroll explains the law of the lever and uses it to calculate the areas and centers of gravity of various geometric figures.
- On Spirals
- In this scroll, Archimedes defines what is now called Archimedes' spiral, the first mechanical curve (curve traced by a moving point) ever considered by a Greek mathematician.
- On the Sphere and the Cylinder
- In this scroll Archimedes proves that the relation of the area of a sphere to that of a circumscribed straight cylinder is the same as that of the volume of the sphere to the volume of the cylinder (exactly 2/3).
- On Conoids and Spheroids
- In this scroll Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
- On Floating Bodies (2 volumes)
- In the first part of this scroll, Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a center of gravity. This was probably an attempt at explaining the observation made by Greek astronomers that the Earth is round. His fluids were not self-gravitating: he assumed the existence of a point towards which all things fall and derived the spherical shape.
- In the second part, he calculated the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, which is reminiscent of the way icebergs float.
- The Quadrature of the Parabola
- In this scroll, Archimedes calculates the area of a segment of a parabola (the figure delimited by a parabola and a secant line not necessarily perpendicular to the axis). The final answer is obtained by triangulating the area and summing the geometric series with ratio 1/4.
- Stomachion (Ostomachion)
- This is a Greek puzzle similar to a Tangram, and may be the first reference to this game. Archimedes calculates the areas of the various pieces. Recent discoveries indicate that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. This is possibly the first use of combinatorics to solve a problem.
- Archimedes' Cattle Problem
- Archimedes wrote a letter to the scholars in the Library of Alexandria, who apparently had downplayed the importance of Archimedes' works. In this letter, he challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations, some of them quadratic (in the more complicated version). This problem was recently solved with the aid of a computer. The solution is a very large number, approximately 7.760271 × 10^206544. (See the external links to the Cattle Problem.)
- The Sand Reckoner
- In this scroll, Archimedes counts the number of grains of sand fitting inside the universe. This book mentions Aristarchus of Samos' theory of the solar system, concluding that it is impossible, and contemporary ideas about the size of the Earth and the distance between various celestial bodies.
- The Method
- This work, which was unknown in the Middle Ages, but the importance of which was realized after its discovery, pioneers the use of infinitesimals, showing how breaking up a figure into an infinite number of infinitely small parts could be used to determine its area or volume. Archimedes probably considered these methods not mathematically precise, and he used these methods to find at least some of the areas or volumes he sought, and then used the more traditional method of exhaustion to prove them.
- ↑ Ship Shaking Device, Syracuse, 214 B.C.E., by Kristin Shutts and Anne-Sinclair Beauchamp. e-museum, Smith College. Retrieved June 6, 2008.
- ↑ "Archimedes Death Ray" Experiment Results. MIT. Retrieved June 6, 2008.
- ↑ "Mythbusters," Discovery Channel. Episode 55: Steam Cannon/Breakfast Cereal. Retrieved June 6, 2008.
- ↑ Tomb of Archimedes Sources. NYU Math Dept. Retrieved June 6, 2008.
- Archimedes; Sir Thomas Heath, (Translator). The Works of Archimedes. reprint ed. Dover Publications 2002. ISBN 0486420841
- Archimedes; Reviel Netz. The Works of Archimedes: Translation and Commentary. Cambridge University Press, 2004. ISBN 0521661609
- Dijksterhuis, E. J. Archimedes. Princeton, Princeton Univ. Press, 1987. ISBN 0691084211.
- Kleiner, Fred S.; Mamiya, Christin J. Gardner's Art Through the Ages, 12th ed., Vol. II. Los Angeles: Thompson Wadsworth, 2005.
- Laubenbacher, Reinhard, and David Pengelley. Mathematical Expeditions: Chronicles by the Explorers. 1999. ISBN 0387984348
- Plutarch. Plutarch's Lives, translated by John Dryden. New York: Modern Library, ASIN: B000RS0LX6
- Stadter, Philip A. Plutarch's historical methods: An analysis of the Mulierum virtutes. Harvard University Press, 1965. ASIN: B0007DKTAG
Introduction for young adults
- Benkick, Jeanne. Archimedes and the Door to Science. Bethlehem Books, 1995. ISBN 1883937124
- Zannos, Susan. The Life and Times of Archimedes. (Biography from Ancient Civilizations) Mitchell Lane Publishers, 2004. ISBN 1584152427
All links retrieved October 29, 2012.
- Archimedes' Book of Lemmas at cut-the-knot
- Archimedes and the Rhombicuboctahedron by Antonio Gutierrez from Geometry Step by Step from the Land of the Incas.
- Archimedes Home Page
- MacTutor Biography, Archimedes
- Inside the Archimedes Palimpsest NOVA
- The Archimedes Palimpsest project The Walters Art Museum in Baltimore, Maryland
- Archimedes - The Golden Crown points out that in reality Archimedes may well have used a more subtle method than the one in the classic version of the story.
- Archimedes' Quadrature Of The Parabola Translated by Thomas Heath.
- Archimedes' On The Measurement Of The Circle Translated by Thomas Heath.
- Archimedes' Cattle Problem
- Project Gutenberg, Archimedes, e-text
- Angle Trisection by Archimedes of Syracuse (Java)
- Archimedes'Triangle (Java)
- An ancient extra-geometric proof
Stoichiometry and Balancing Reactions
Stoichiometry is a section of chemistry that involves using relationships between reactants and/or products in a chemical reaction to determine desired quantitative data. In Greek, stoikhein means element and metron means measure, so stoichiometry literally translated means "the measure of elements." In order to use stoichiometry to run calculations about chemical reactions, it is important to first understand the relationships that exist between products and reactants and why they exist, which requires understanding how to balance reactions.
In chemistry, chemical reactions are frequently written as an equation, using chemical symbols. The reactants are displayed on the left side of the equation and the products on the right, separated by a single or double arrow that signifies the direction of the reaction. The significance of single versus double arrows is important when discussing solubility constants, but we will not go into detail about it in this module. To balance an equation, it is necessary that there are the same number of atoms of each element on the left side of the equation as on the right. One can achieve this by adjusting the stoichiometric coefficients.
Reactants to Products
A chemical equation is like a recipe for a reaction, so it displays all the ingredients, or terms, of a chemical reaction. It includes the elements, molecules, or ions in the reactants and in the products, as well as their states, and the proportion in which each particle is consumed or created relative to the others, through the stoichiometric coefficients. The following equation demonstrates the typical format of a chemical equation:
2 Na (s) + 2 HCl (aq) → 2 NaCl (aq) + H2 (g)
Chemical Symbols in an equation
In the above equation, the elements present in the reaction are represented by their chemical symbols. Based on the Law of Conservation of Mass, which states that matter is neither created nor destroyed in a chemical reaction, every chemical reaction has the same elements in its reactants and products, though the elements they are paired up with often change in a reaction. In this reaction, sodium (Na), hydrogen (H), and chlorine (Cl) are the elements present in the reactants, so based on the law of conservation of mass, they are also present on the product side of the equation. Displaying each element is important when using the chemical equation to convert between elements.
In a balanced reaction, both sides of the equation have the same number of atoms of each element. The stoichiometric coefficient is the number written in front of an atom, ion, or molecule in a chemical reaction to balance the number of each element on both the reactant and product sides of the equation. Though the stoichiometric coefficients can be fractions, whole numbers are frequently used and often preferred. These stoichiometric coefficients are useful since they establish the mole ratios between reactants and products. In the equation:
2 Na (s) + 2 HCl (aq) → 2 NaCl (aq) + H2 (g)
we can determine that 2 moles of HCl will react with 2 moles of Na(s) to form 2 moles of NaCl(aq) and 1 mole of H2(g). If we know how many moles of Na we started with, we can use the ratio of 2 moles of NaCl to 2 moles of Na to determine how many moles of NaCl were produced, or we can use the ratio of 1 mole of H2 to 2 moles of Na to convert to moles of H2. This is known as the coefficient factor. The balanced equation makes it possible to convert information about one reactant or product into quantitative data about another reactant or product. Understanding this is essential to solving stoichiometric problems!
Lead (IV) hydroxide and sulfuric acid react as shown below. Balance the reaction. (Hint: Count the number of each element.)
Pb(OH)4 + H2SO4 → Pb(SO4)2 +H2O
Balancing reactions involves finding least common multiples between the numbers of atoms of each element present on both sides of the equation. In general, when applying coefficients, add coefficients to free elements and simple molecules last; a short atom-counting sketch follows the two conditions below.
A balanced equation ultimately has to satisfy two conditions.
- The numbers of each element on the left and right side of the equation must be equal.
- The charge on both sides of the equation must be equal. It is especially important to pay attention to charge when balancing redox reactions. (Refer to redox reactions for more details on this.)
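As a concrete illustration of the first condition, the sketch below counts the atoms on each side of the balanced sodium/hydrochloric acid equation from earlier. The compounds are entered by hand as element-count maps; this is a simple bookkeeping check, not a general formula parser or automatic balancer.

```python
from collections import Counter

def count_atoms(side):
    """Total atoms of each element on one side of an equation.

    `side` is a list of (coefficient, {element: atoms per formula unit}) pairs.
    """
    totals = Counter()
    for coefficient, formula in side:
        for element, count in formula.items():
            totals[element] += coefficient * count
    return totals

# 2 Na + 2 HCl -> 2 NaCl + H2, entered by hand
reactants = [(2, {"Na": 1}), (2, {"H": 1, "Cl": 1})]
products = [(2, {"Na": 1, "Cl": 1}), (1, {"H": 2})]

print(count_atoms(reactants) == count_atoms(products))  # True: the equation is balanced
```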
Stoichiometry and Balanced Equations
In stoichiometry, balanced equations make it possible to compare different elements through the stoichiometric factor discussed earlier. This is the mole ratio between two factors in a chemical reaction found through the ratio of stoichiometric coefficients. Here is a real world example to show how stoichiometric factors are useful.
Example: A “real World Example”
There are 12 party invitations and 20 stamps. Each party invitation needs 2 stamps to be sent. How many party invitations can be sent?
The equation for this can be written as I + 2S =>IS2 where I represents invitations, S represents stamps, and IS2 represents the sent party invitations consisting of one invitation and two stamps. Based on this, we have the ratio of 2 stamps for 1 sent invite, based on the balanced equation.
In this example are all the reactants (stamps and invitations) used up? No, and this is normally the case with chemical reactions. There is often excess of one of the reactants. The limiting reagent, the one that runs out first, prevents the reaction from continuing and determines the maximum amount of product that can be formed.
Q: What is the limiting reagent in this example?
A: Stamps, because there were only enough stamps to send out 10 complete invitations, whereas there were enough invitations for 12. Aside from just looking at the problem, the problem can be solved using stoichiometric factors.
12 I x (1IS2/1I) = 12 IS2 possible
20 S x (1IS2/2S) = 10 IS2 possible
When no reagent is limiting, because the reactants are present in exactly the ratio of the balanced equation and run out at the same time, the mixture is said to be in stoichiometric proportions.
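The same bookkeeping can be written as a short routine: divide each available amount by its stoichiometric coefficient and take the smallest result. The numbers reproduce the invitation-and-stamp example above; the function name is just illustrative.

```python
def max_product(available, coefficients):
    """Return (units of product possible, name of the limiting reagent).

    `available` maps each reagent to the amount on hand; `coefficients`
    maps each reagent to the amount consumed per unit of product.
    """
    possible = {name: available[name] / coefficients[name] for name in available}
    limiting = min(possible, key=possible.get)
    return possible[limiting], limiting

amount, limiting = max_product({"invitations": 12, "stamps": 20},
                               {"invitations": 1, "stamps": 2})
print(amount, limiting)  # 10.0 sent invitations, limited by the stamps
```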
Types of Reactions
There are 6 basic types of reactions.
- Combustion: Combustion is the formation of CO2 and H2O as products upon reacting with O2.
- Combination (synthesis): Combination is the addition of 2 or more simple reactants to form a complex product.
- Decomposition: Decomposition is when complex reactants are broken down into simpler products.
- Single Displacement: Single displacement is when an element from one reactant switches places with an element in the other reactant to form two new products.
- Double Displacement: Double displacement is when two elements (or ions) from the reactants switch places with each other to form two new products.
- Acid-Base: Acid-base reactions are reactions in which an acid and a base react to form a salt and water.
Before applying stoichiometric factors to chemical equations, you need to understand molar mass. Molar mass is a useful chemical ratio between mass and moles. The atomic mass of each individual element, as listed in the periodic table, establishes this relationship for atoms and ions. For compounds or molecules, you take the sum of the atomic mass times the number of atoms of each element in order to determine the molar mass.
Example 1: What is the molar mass of H2O?
Molar mass = 2(1.00794g/mol) + 1(15.9994g/mol) = 18.01528g/mol
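The same sum can be automated for any compound written as an element-count map. The atomic masses below are the standard values used in this section; the compound is entered by hand rather than parsed from its formula.

```python
ATOMIC_MASS = {"H": 1.00794, "C": 12.011, "O": 15.9994}  # g/mol

def molar_mass(formula):
    """Molar mass of a compound given as {element: number of atoms}."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

print(round(molar_mass({"H": 2, "O": 1}), 5))  # 18.01528 g/mol for H2O
```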
Using molar mass and coefficient factors, it is possible to convert mass of reactants to mass of products or vice versa.
Example 2: Propane (C3H8) burns in this reaction: C3H8 + 5O2 ----> 4H2O + 3CO2
Q: If 200g of propane is burned, how many g of H2O is produced?
A: 327.27 g H2O
Steps to getting this answer: Since you cannot convert directly from grams of reactant to grams of product, you must first convert from grams of C3H8 to moles of C3H8, then from moles of C3H8 to moles of H2O, and finally from moles of H2O to grams of H2O.
Step 1: 200 g C3H8 is equal to 4.54 mol C3H8.
Step 2: Since there is a ratio of 4:1 of H2O to C3H8 for every 4.54 molC3H8 there are 18.18 molH2O.
Step 3: Convert 18.18 mol H2O to g H2O. 18.18 mol H2O is equal to 327.27 g H2O.
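The three steps can be chained into one calculation. The molar masses below (44.10 g/mol for propane and 18.02 g/mol for water) are rounded values, so the result comes out near 327 g, in line with the answer above.

```python
def grams_product(grams_reactant, mm_reactant, mm_product, product_per_reactant):
    """grams reactant -> moles reactant -> moles product -> grams product."""
    moles_reactant = grams_reactant / mm_reactant
    moles_product = moles_reactant * product_per_reactant
    return moles_product * mm_product

# C3H8 + 5 O2 -> 3 CO2 + 4 H2O: 4 mol H2O are produced per mol C3H8
print(round(grams_product(200.0, 44.10, 18.02, 4), 1), "g H2O")
```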
Variation in Stoichiometric Equations
Almost every quantitative relationship can be converted into a ratio that can be useful in data analysis.
Density is calculated as mass/volume. This ratio can be useful in determining the volume of a solution, given the mass or useful in finding the mass given the volume. In the latter case, the inverse relationship would be used. (In conversions, be aware of units. Make sure unwanted units always cancel out in your work.)
Volume x (Mass/Volume) = Mass
Mass x (Volume/Mass) = Volume
Percents establish a relationship as well. A percent mass states how many grams of a mixture are of a certain element or molecule. The percent X% states that of every 100grams of a mixture, X grams are of the stated element or compound. This is useful in determining mass of a desired substance in a molecule.
Example: A substance is 5% carbon by mass. If the total mass of the substance is 10grams, what is the mass of carbon in the sample? How many moles of carbon are there?
10g sample x (5g carbon/100g sample) = 0.5g carbon
0.5g carbon x (1mol carbon/12.011g carbon) = 0.0416 mol carbon
Molarity (moles/L) establishes a relationship between moles and liters. Given volume and molarity, it is possible to calculate mole or use moles and molarity to calculate volume. This is useful in chemical equations and dilutions.
Example: How much 5M stock solution is needed to prepare 100 mL of 2M solution?
100mL of dilute sol'n (1L/1000mL)(2mol/1Lsolution)(1L stock solution/5mol solution)(1000ml stock solution/1L stock solution) = 40 mL stock solution
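The same dilution arithmetic reduces to C1·V1 = C2·V2; a minimal sketch of the calculation above:

```python
def stock_volume_needed(stock_molarity, final_molarity, final_volume_ml):
    """Volume of stock solution (mL) required so that C1*V1 = C2*V2."""
    return final_molarity * final_volume_ml / stock_molarity

print(stock_volume_needed(5.0, 2.0, 100.0), "mL of stock")  # 40.0 mL of the 5 M stock
```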
These ratios of molarity, density, and mass percent are useful in the complex examples ahead.
Determining Empirical Formulas
An empirical formula can be determined through chemical stoichiometry by determining which elements are present in the molecule and in what ratio. The ratio of elements is determined by comparing the number of moles of each element present.
1.000 gram of an organic molecule burns completely in the presence of excess oxygen. It yields 0.0333 mol of CO2 and 0.599 g of H2O. What is the empirical formula of the organic molecule?
This is a combustion reaction. The problem requires that you know that organic molecules consist of some combination of carbon, hydrogen, and oxygen elements. With that in mind, write the chemical equation out, replacing unknown numbers with variables. Do not worry about coefficients here.
CxHyOz(g) + O2(g) => CO2(g) + H2O(g)
Since all the moles of C and H in the CO2 and H2O, respectively, must have come from the 1 gram sample of unknown, start by calculating how many moles of each element were present in the unknown sample.
0.0333mol CO2 (1mol C/ 1mol CO2) = 0.0333mol C in unknown
0.599g H2O (1mol H2O/ 18.01528g H2O)(2mol H/ 1mol H2O) = 0.0665 mol H in unknown
Calculate the final moles of oxygen by taking the sum of the moles of oxygen in CO2 and H2O. This will give you the number of moles of oxygen from both the unknown organic molecule and the O2, so you must subtract the moles of oxygen transferred from the O2.
Moles of oxygen in CO2:
0.0333mol CO2 (2mol O/1mol CO2) = 0.0666 mol O
Moles of oxygen in H2O:
0.599g H2O (1mol H2O/18.01528 g H2O)(1mol O/1mol H2O) = 0.0332 mol O
Using the Law of Conservation of Mass, we know that the mass before a reaction must equal the mass after a reaction. With this we can use the difference between the final mass of products and the initial mass of the unknown organic molecule to determine the mass of the O2 reactant.
0.0333 mol CO2 (44.0098 g CO2/1 mol CO2) = 1.466 g CO2
1.466g CO2 + 0.599g H2O - 1.000g unknown organic = 1.065g O2
Moles of oxygen in O2
1.065g O2(1mol O2/ 31.9988g O2)(2mol O/1mol O2) = 0.0666mol O
Moles of oxygen in unknown
(0.0666mol O + 0.0332 mol O) - 0.0666mol O = 0.0332 mol O
Construct a mole ratio for C, H, and O in the unknown and divide by the smallest number.
(1/0.0332)(0.0333mol C : 0.0665mol H : 0.0332 mol O) => 1mol C: 2 mol H: 1 mol O
From this ratio, the empirical formula is calculated to be CH2O.
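For readers who like to automate the bookkeeping, the entire empirical-formula calculation can be collected into one short script. The sketch below simply repeats the arithmetic of the worked example, assuming the same molar masses:

```python
# Empirical formula of CxHyOz from the combustion data above (1.000 g sample)
M_H2O, M_CO2, M_O2 = 18.01528, 44.0098, 31.9988   # g/mol

mol_CO2 = 0.0333
g_H2O = 0.599
mol_H2O = g_H2O / M_H2O

mol_C = mol_CO2                           # 1 mol C per mol CO2
mol_H = 2 * mol_H2O                       # 2 mol H per mol H2O
mol_O_products = 2 * mol_CO2 + mol_H2O    # O atoms in CO2 plus O atoms in H2O

# Conservation of mass gives the mass, and hence the moles, of O2 consumed
g_O2 = mol_CO2 * M_CO2 + g_H2O - 1.000
mol_O_from_O2 = 2 * g_O2 / M_O2

mol_O = mol_O_products - mol_O_from_O2    # O that came from the unknown itself
smallest = min(mol_C, mol_H, mol_O)
print([round(n / smallest) for n in (mol_C, mol_H, mol_O)])   # [1, 2, 1] -> CH2O
```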
Complex Stoichiometry Problems
Example: An amateur welder melts down two metals to make an alloy that is 45% copper by mass and 55% iron(II) by mass. The alloy's density is 3.15 g/L. One liter of alloy completely fills a mold of volume 1000 cm3. He accidentally breaks off a 1.203 cm3 piece of the homogeneous mixture and sweeps it outside, where it reacts with acid rain over years. Assuming the acid reacts with all the iron(II) and not with the copper, how many grams of H2(g) are released into the atmosphere because of the amateur's carelessness? (Note that the situation is fiction.)
Step 1: Write a balanced equation after determining the products and reactants. In this situation, since we assume copper does not react, the reactants are only H+(aq) and Fe(s). The given product is H2(g) and based on knowledge of redox reactions, the other product must be Fe2+(aq).
Fe(s) + 2H+(aq) => H2(g) + Fe2+(aq)
Step 2: Write down all the given information
Alloy density = (3.15g alloy/ 1L alloy)
x grams of alloy = 45% copper = (45g Cu(s)/100g alloy)
x grams of alloy = 55% iron(II) = (55g Fe(s)/100g alloy)
1 liter alloy = 1000cm3 alloy
alloy sample = 1.203cm3 alloy
Step 3: Answer the question of what is being asked. The question asks how much H2(g) was produced. You are expected to solve for the amount of product formed.
Step 4: Start with the compound you know the most about and use given ratios to convert it to the desired compound.
Convert the given amount of alloy reactant to solve for the moles of Fe(s) reacted.
1.203cm3 alloy(1liter alloy/1000cm3 alloy)(3.15g alloy/1liter alloy)(55g Fe(s)/100g alloy)(1mol Fe(s)/55.8g Fe(s))=3.74 x 10-5 mol Fe(s)
Make sure all the units cancel out to give you moles of Fe(s). The above conversion involves using multiple stoichiometric relationships from density, percent mass, and molar mass.
The balanced equation must now be used to convert moles of Fe(s) to moles of H2(g). Remember that the balanced equation's coefficients state the stoichiometric factor or mole ratio of reactants and products.
3.74 x 10-5 mol Fe (s) (1mol H2(g)/1mol Fe(s)) = 3.74 x 10-5 mol H2(g)
Step 5: Check units
The question asks for how many grams of H2(g) were released so the moles of H2(g) must still be converted to grams using the molar mass of H2(g). Since there are two H in each H2, its molar mass is twice that of a single H atom.
molar mass = 2(1.00794g/mol) = 2.01588g/mol
3.74 x 10-5 mol H2(g) (2.01588g H2(g)/1mol H2 (g)) = 7.53 x 10-5 g H2(g) released
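The whole chain of ratios in Steps 4 and 5 can also be written as one small program. The sketch below is an illustration of the unit chain only, using the density, percentage, and molar masses given in the (fictional) problem:

```python
# Grams of H2 released when all the Fe(s) in the 1.203 cm3 alloy piece reacts:
# Fe(s) + 2 H+(aq) -> H2(g) + Fe2+(aq), i.e. 1 mol H2 per mol Fe
alloy_cm3 = 1.203
density = 3.15 / 1000.0       # g alloy per cm3  (3.15 g/L and 1 L = 1000 cm3)
frac_Fe = 0.55                # 55% iron(II) by mass
M_Fe, M_H2 = 55.8, 2.01588    # g/mol

mol_Fe = alloy_cm3 * density * frac_Fe / M_Fe   # cm3 -> g alloy -> g Fe -> mol Fe
g_H2 = mol_Fe * M_H2                            # 1:1 mole ratio, then mol -> g
print(f"{g_H2:.2e} g H2")                       # about 7.5e-05 g H2
```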
On Your Own Practice
Stoichiometry and balanced equations make it possible to use one piece of information to calculate another. There are countless ways stoichiometry can be used in chemistry and everyday life. Try and see if you can use what you learned to solve the following problems.
1) Why are the following equations not considered balanced?
a. H2O(l) => H2(g) + O2(g)
b. Zn(s) + Au+(aq) => Zn2+(aq) + Ag(s)
2) Hydrochloric acid reacts with a solid chunk of aluminum to produce hydrogen gas and aluminum ions. Write the balanced chemical equation for this reaction.
3) Given a 10.1M stock solution, how many mL must be added to water to produce 200mL of 5M solution?
4) If 0.502g of methane gas react with 0.27g of oxygen to produce carbon dioxide and water, what is the limiting reagent and how many moles of water are produced? The unbalanced equation is provided below.
CH4(g) + O2(g) => CO2(g) + H2O(l)
5) A 0.777g sample of an organic compound is burned completely. It produces 1.42g CO2 and 0.388g H2O. Knowing that all the carbon and hydrogen atoms in CO2 and H2O came from the 0.777g sample, what is the empirical formula of the organic compound?
Weblinks for further reference
- 1. Refer to http://chemistry.about.com/cs/stoich.../aa042903a.htm as an outside resource on how to balance chemical reactions.
- 2. Refer to http://www.learnchem.net/tutorials/stoich.shtml as an outside resource on stoichiometry.
- T. E. Brown, H.E LeMay, B. Bursten, C. Murphy. Chemistry: The Central Science. Prentice Hall, January 8, 2008.
- J. C. Kotz P.M. Treichel, J. Townsend. Chemistry and Chemical Reactivity. Brooks Cole, February 7, 2008.
- Petrucci, Harwood, Herring, Madura. General Chemistry Principles & Modern Applications. Prentice Hall. New Jersey, 2007.
- Joseph Nijmeh (UCD)
| http://chemwiki.ucdavis.edu/Analytical_Chemistry/Chemical_Reactions/Stoichiometry_and_Balancing_Reactions | 13
67 | A variety of notations are used to denote the time derivative. In addition to the normal (Leibniz's) notation, dy/dt,
two very common shorthand notations are also used: adding a dot over the variable, ẏ (Newton's notation), and adding a prime to the function, y′ (Lagrange's notation). These two shorthands are generally not mixed in the same set of equations.
Higher time derivatives are also used: the second derivative with respect to time is written as d²y/dt²,
with the corresponding shorthands of ÿ and y″.
As a generalization, the time derivative of a vector, say r(t) = [x(t), y(t), z(t)], is defined as the vector whose components are the derivatives of the components of the original vector. That is,
dr/dt = [dx/dt, dy/dt, dz/dt].
Use in physics
Time derivatives are a key concept in physics. For example, for a changing position x, its time derivative ẋ is its velocity, and its second derivative with respect to time, ẍ, is its acceleration. Even higher derivatives are sometimes also used: the third derivative of position with respect to time is known as the jerk. See motion graphs and derivatives.
A large number of fundamental equations in physics involve first or second time derivatives of quantities. Many other fundamental quantities in science are time derivatives of one another:
- force is the time derivative of momentum
- power is the time derivative of energy
- electrical current is the time derivative of electric charge
and so on.
A common occurrence in physics is the time derivative of a vector, such as velocity or displacement. In dealing with such a derivative, both magnitude and orientation may depend upon time.
Example: circular motion
For example, consider a particle moving in a circular path. Its position is given by the displacement vector r = [x, y], related to the angle, θ, and radial distance, ρ, as defined in Figure 1: x = ρ cos(θ), y = ρ sin(θ).
For purposes of this example, time dependence is introduced by setting θ = t. The displacement (position) at any time t is then: r(t) = [ρ cos(t), ρ sin(t)] = ρ [cos(t), sin(t)].
The second form shows the motion described by r(t) is in a circle of radius ρ because the magnitude of r(t) is given by |r(t)| = ρ √(cos²(t) + sin²(t)) = ρ,
using the trigonometric identity sin²(t) + cos²(t) = 1.
With this form for the displacement, the velocity now is found. The time derivative of the displacement vector is the velocity vector. In general, the derivative of a vector is a vector made up of components each of which is the derivative of the corresponding component of the original vector. Thus, in this case, the velocity vector is: v(t) = dr(t)/dt = ρ [−sin(t), cos(t)].
Thus the velocity of the particle is nonzero even though the magnitude of the position (that is, the radius of the path) is constant. The velocity is directed perpendicular to the displacement, as can be established using the dot product: r(t) · v(t) = ρ² [−cos(t) sin(t) + sin(t) cos(t)] = 0.
Acceleration is then the time-derivative of velocity: a(t) = dv(t)/dt = ρ [−cos(t), −sin(t)] = −r(t).
The acceleration is directed inward, toward the axis of rotation. It points opposite to the position vector and perpendicular to the velocity vector. This inward-directed acceleration is called centripetal acceleration.
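These statements can be checked numerically. The sketch below (not part of the original article) assumes ρ = 1 and θ = t, approximates the time derivatives by central differences, and confirms that the velocity is perpendicular to the position while the acceleration points back toward the center:

```python
import math

rho = 1.0

def r(t):
    """Position on the circle with theta = t."""
    return (rho * math.cos(t), rho * math.sin(t))

def ddt(f, t, h=1e-5):
    """Central-difference time derivative of a 2-D vector-valued function."""
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

t = 0.7
v = ddt(r, t)                        # velocity
a = ddt(lambda s: ddt(r, s), t)      # acceleration (derivative of velocity)

print(sum(p * q for p, q in zip(r(t), v)))   # ~0: v is perpendicular to r
print(a, r(t))                               # a is approximately -r(t): points inward
```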
Use in economics
In economics, many theoretical models of the evolution of various economic variables are constructed in continuous time and therefore employ time derivatives. See, for example, the exogenous growth model and Romer (1996), ch. 1-3. One situation involves a stock variable and its time derivative, a flow variable. Examples include:
- The flow of net fixed investment is the time derivative of the capital stock.
- The flow of inventory investment is the time derivative of the stock of inventories.
- The growth rate of the money supply is the time derivative of the money supply divided by the money supply itself.
Sometimes the time derivative of a flow variable can appear in a model:
- The growth rate of output is the time derivative of the flow of output divided by output itself.
- The growth rate of the labor force is the time derivative of the labor force divided by the labor force itself.
And sometimes there appears a time derivative of a variable which, unlike the examples above, is not measured in units of currency:
- The time derivative of a key interest rate can appear.
- The inflation rate is the growth rate of the price level—that is, the time derivative of the price level divided by the price level itself.
References
- Chiang, Alpha C., Fundamental Methods of Mathematical Economics, McGraw-Hill, third edition, 1984, ch. 14, 15, 18.
- Romer, David, Advanced Macroeconomics, McGraw-Hill, 1996. | http://en.wikipedia.org/wiki/Time_derivative | 13 |
61 | Grade Levels: 3 - 7
Math students in middle school will use estimation to approximate values, angle, and area measurements of a triangle.
Explain to students that they are going to work as a class to estimate the measurements of several angles and compare the estimates with measured values. Then, students will work in groups of four to estimate a triangle's angles and area. Explain that this lesson covers two benchmark units, degrees and centimeters.
Draw two triangles on the chalkboard, and write the base and height for each: first triangle, height = 732 and base = 1239; second triangle, height = 128 and base = 985. Have students select an acute angle from the first triangle, and show them that they can visualize whether the angle is less than or greater than 90 degrees. Then have them determine if the angle is less than or greater than 45 degrees. This will help them narrow the angle's range to 45 degrees (0-45 or 45-90). If the angle is less than 45 degrees, students can determine whether the angle is closer to 0 or 45 degrees. Guide them through this process for the first triangle, and then repeat the process for the second triangle. Prompt them with questions about the angle's relation to 0, 45, 90, 135, and 180 degrees to help them narrow the acceptable range, and then have them make their estimate. Finally, have students measure the actual angles and compare the estimates with measured values.
Have students estimate the area for each triangle by estimating the product dictated by the formula for the area of a triangle (area = [base/2] x height) and document their process in their notebooks. Explain that they should choose numbers that are close to the originals, but are easier to work with. For example, with the parameters given for the first triangle, a student might say,
"The base is 1239, which is very close to 1200, so I will divide that by 2 to get 600.
600 x 732 is difficult to calculate, but 732 is very close to 700, and
600 x 700 = 420,000."
Have students calculate the actual area with the exact measurements and compare these measurements to their estimates. The actual area of the first triangle is 453,474, so the estimate is only off by about 7% ((453,474 - 420,000)/453,474 ≈ 0.07 = 7%).
Divide the class into groups of four. Have three of the students face one another, and provide them with a string approximately 10 feet in length. Have each student tie his or her string to another student's so that they form a triangle. Have students estimate the angle they create at their vertex (the point at which they hold the string). Ask the fourth student to record the estimates on a sheet of paper, add the three estimates, and compare the sum to 180 degrees. The sum of the angles of a triangle is equal to 180 degrees, so the addition of the estimates should be somewhere between 170 and 190 degrees. If the sum of the estimates is outside the acceptable range, discuss possible reasons why. Next, have the fourth student use a protractor to measure each of the angles and record the actual angle measurements. The fourth student should share this information with the rest of the group.
Have the four students estimate the base and height of the triangle in centimeters and then estimate the area by performing the calculation in their heads. They should use estimation techniques for the base and height of the triangle as well as for the area.
For example, if the estimate for the base is 68 centimeters, and the estimate for the height is 81 centimeters, students might estimate 68 x 81 is similar to 70 x 80 so the estimated area is (70/2) x 80, or 2800.
Emphasize that the best way to estimate the product of two numbers is to either round the numbers up or down or to use a substitute number that is easier to work with. Answers will not be exact, but the estimates should be reasonable. Have the fourth student record all estimates and then measure the base and height of the triangle (in centimeters). Finally, have the fourth student calculate the area and compare the actual value to the estimates.
Have students write a short paragraph that describes how they arrived at their estimates for the triangle's angles and area. Collect their paragraphs, and evaluate their understanding of the estimation process. As a final evaluation, have students draw two triangles with different measurements on one sheet of paper. Have them estimate both triangles' angles and areas. They should provide estimates for the lengths of all sides as well as a computational estimate of the area. Evaluate the estimates to determine if students are able to estimate proportionally. For example, if one side is obviously longer than another, be sure estimates reflect that. For the angle measurements, evaluate students' ability to estimate angles and their relation to well-known angles in addition to how close the sum of the estimates is to 180 degrees. For the estimate of the area, evaluate students' choices of suitable alternate numbers with which to perform computational estimates.
© 2000-2013 Pearson Education, Inc. All Rights Reserved. | http://www.teachervision.fen.com/geometry/lesson-plan/48942.html?for_printing=1 | 13 |
55 | Plain text refers to any string (i.e., finite sequence of characters) that consists entirely of printable characters (i.e., human-readable characters) and, optionally, a very few specific types of control characters (e.g., characters indicating a tab or the start of a new line).
A character is any letter, symbol or mark employed in writing or printing a written language (i.e., a language used by humans for which a writing system has been developed). The characters used to write the English language are the 26 lower case (i.e., small) and the 26 upper case (i.e., capital) letters of the English alphabet, the Arabic numerals, punctuation marks and a variety of other symbols (e.g., the ampersand, the equals sign, the tilde and the at symbol). An alphabet is the ordered, standardized set of letters that is used to write or print a written language.
Plain text usually refers to text that consists entirely of the ASCII printable characters and a few of its control characters. ASCII, an acronym for American standard code for information interchange, is based on the characters used to write the English language as it is used in the U.S. It is the de facto standard for the character encoding (i.e., representing characters by numbers) that is utilized by computers and communications equipment to represent text, and it (or some compatible extension of it) is used on most computers, including nearly all personal computers and workstations.
The term printable characters refers to the 96 ASCII characters (inclusive of the space character, which occupies a single space) that are actually human readable when displayed on a computer screen or when printed on paper. ASCII also contains a substantial number of non-printable characters (i.e., control characters) that were originally intended to control devices (e.g., printers) that make use of ASCII. It does not contain any characters that represent the formatting of text, such as those that indicate the typeface, the font, underlining, margins, etc.
A typeface is a specific design for the entire set of characters that is used to write a language or languages. Among the most popular of the thousands that are available for the English language are Helvetica, Times Roman and Courier. Each typeface contains numerous fonts. A font is an implementation of a typeface for a specific size and style (e.g., plain, bold or italic) of type.
Plain text can also be defined in terms of other character encoding systems. For example, plain Unicode text is a sequence of Unicode characters. Unicode is a newer system that attempts to provide a unique encoding for every character used by the world's languages and which incorporates ASCII as a subset. Thus, plain Unicode text could include human-readable characters from almost any language or combination thereof (e.g., a mixture of Chinese, Russian and English characters as might be used in a trilingual dictionary).
Source code (i.e., the original form of any computer program) is typed into a computer in ASCII plain text by humans using any of thousands programming languages (among the most common of which are C, C++ and Java). When the source code files have been converted into object code by a compiler, they are no longer plain text, but rather binary files. A binary file is a file that can be directly read by a computer's CPU (central processing unit); it contains at least some data that is not plain text and is thus generally not readable by humans.
The only formatting possible for plain text is that which can be created with the space, tab and new line characters. Thus, for example, new lines and new paragraphs can be created, and vertical spaces can be added between lines and between paragraphs. There is no variation in the typeface or font, no underlining, no italic or bold characters and no superscripts or subscripts. Likewise, plain text does not contain any images or hyperlinks (i.e., automated cross-references to other documents).
However, plain text can contain instructions that are written in plain text for formatting, for adding images, for creating hyperlinks, etc. that can be used by programs that convert plain text into other forms. That is, it can contain tags (i.e., instructions or indicators that are written in plain text) that tell a word processor, web browser or other program to format it in a certain way, including which typefaces and fonts to use, how to set the margins, where to underline the text and where to use bold or italic characters.
HTML (hypertext markup language) and XML (extensible markup language) are good examples of the use of instructions that (1) are used to convert plain text into some form of formatted text, (2) are written in plain text and (3) are embedded in the plain text documents that they are used to format. For example, the HTML tags <b> and </b>, although written in plain text, instruct any web browser that reads a file containing them to render (i.e., display) any plain text located between them in bold characters. Among the many other things that HTML can tell browsers are where to create hyperlinks, how to set margins, which images to use and where to insert them, which typefaces and fonts to use and where to render text in italics or underlined characters.
Rich text, also referred to as styled text, consists of plain text plus additional information in binary format, such as about fonts, language identifiers and margins.
Plain text should not be confused with plaintext (a single word instead of two). The latter is a term used in cryptography (i.e., the converting of information into an unreadable format) that refers to a plain text message prior to encryption or after decryption, that is, a message in human-readable form.
Plain text offers some important advantages over other ways of storing and manipulating data. They revolve around the fact that it is the most flexible and portable format for data. That is, everything can be done with plain text that could be done with any binary format, and some things can be done with plain text that cannot easily (if at all) be done with some binary formats. This is because plain text is supported by nearly every application program on every operating system and on every type of CPU and allows information to be manipulated (including, searching, sorting and updating) both manually and programmatically using virtually every text processing tool in existence.
This flexibility and portability make plain text the best format for storing data persistently (i.e., for years, decades, or even millennia). That is, plain text provides insurance against the obsolescence of any application programs that are needed to create, read, modify and extend data. Human-readable forms of data (including data in self-describing formats such as HTML and XML) will most likely survive longer than all other forms of data and the application programs that created them. In other words, as long as the data itself survives, it will be possible to use it even if the original application programs have long since vanished.
For example, it is very easy to read a data file from a legacy system (i.e., an antiquated program or operating system) or convert it to some other format even if there is little or no information about the original program that was used to create it, if that data file is written in plain text. If it is written in some binary format, such as by a proprietary (i.e., commercial) word processor or spreadsheet program, it might be very difficult or impossible to read or use it.
Plain text is not necessarily unstructured text. Programming languages as well as SGML (standard generalized markup language) and its modern descendants, most notably HTML (hypertext markup language) and XML (extensible markup language), are examples of plain text formats that have well-defined structures. These formats have the important advantage of making plain text easier for computers to read, reorganize and modify while keeping it relatively readable by humans.
The use of plain text is an important part of the Unix philosophy, and thus of the Linux philosophy (which incorporates the Unix philosophy). Consequently, in contrast to other types of operating systems, Linux and other Unix-like operating systems attempt to use plain text as much as possible and to minimize the use of binary code.
For example, programs are designed to produce plain text output to the extent practical. An obvious example of a type of program whose primary output cannot be plain text is a compiler, because its purpose is to translate plain text (i.e., source code) into binary code (i.e., runnable programs that can be read directly by the CPU).
All filters use plain text input and produce plain text output. Filters, which are among the most important programs in Unix-like operating systems, are small and (usually) specialized programs that transform plain text data in some meaningful way. They are designed to be linked together using pipes (represented in commands by the vertical bar character) to form pipelines of commands that can have great power and flexibility.
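As an illustration of the filter idea (not drawn from the text above), the following minimal filter reads plain text from standard input, transforms it, and writes plain text to standard output, so it can be combined with other tools in a pipeline. The file names in the comment are hypothetical:

```python
#!/usr/bin/env python3
# A minimal plain-text filter: reads lines from standard input and writes
# the non-blank ones to standard output in upper case.  It could be used
# in a pipeline such as:   cat notes.txt | python3 shout.py | sort
import sys

for line in sys.stdin:
    line = line.rstrip("\n")
    if line.strip():          # skip blank lines
        print(line.upper())
```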
Also, Unix-like operating systems use plain text files (i.e., files that contain only plain text and no binary data) for system and application configuration information. A major advantage of this approach is ease of access and modification, which can be particularly useful when repairing a crashed or otherwise damaged system. Examples of plain text configuration files include /etc/fstab (which lists the currently mountable filesystems), /etc/passwd (which holds user account data) and /etc/httpd.conf (which is the configuration file for the highly popular Apache web server).
Some operating systems, such as Solaris, the most popular proprietary Unix-like operating system, maintain a binary version of certain system databases in addition to the plain text version as a means of optimizing system performance. The plain text version is retained as a human interface to the binary version, i.e., in order to be able to easily read and modify it.
Despite these advantages, the use of plain text for configuration files and data storage varies greatly according to the operating system and application program.
One disadvantage that has sometimes been claimed for plain text is that it can consume more storage (e.g., hard disk or magnetic tape) space than would a compressed binary format. Another is that it might be computationally more expensive (i.e., require more CPU time) to interpret and process than binary files. However, both of these supposed disadvantages have declined in importance as a result of the rapid reduction in the cost of storage and the rise in processing speeds.
Developers have sometimes expressed concern that keeping metadata (i.e., data about data, including formatting information) in plain text form could expose it to accidental or malicious damage by users. However, although binary data is certainly far more obscure (i.e., difficult to read) than plain text, it is not necessarily more secure. Indeed, there are very effective ways of protecting metadata while still using plain text, such as employing a secure hash of the data and including it in the plain text as a checksum. Plain text can, of course, also be made very obscure if desired through the use of encryption.
A hash, also called a hash function, is an algorithm (i.e., a set of precise, unambiguous rules that specify how to solve some problem or perform some task) or mathematical formula that converts data of any length into a unique, short and fixed-length string of plain text characters, known as a hash value or message digest. A hash function is a one-way function; that is, it easily calculates a hash value but, conversely, it is extremely difficult or impossible to reverse the process and reproduce the original data from the hash value.
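For example, a secure hash of a plain text file can be computed with a few lines of code and stored alongside the file; recomputing and comparing the hash later reveals any modification. The sketch below uses Python's standard hashlib module; the file name is only an example:

```python
import hashlib

def checksum(path):
    """Return the SHA-256 hash of a file's contents as a hex plain-text string."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Store this string next to the plain text file; recompute and compare it
# later to detect accidental or malicious modification.
print(checksum("config.txt"))   # "config.txt" is a hypothetical file name
```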
Proprietary software generally does not use plain text for configuration files and for storing data, but rather it almost always uses some binary format. This is often an attempt by software developers and vendors to lock in existing users of such software, i.e., to make it difficult and costly for them to convert their existing data to any competing file format, including plain text.
There is a large number and a great variety of programs that can produce output in plain text form. The simplest and most basic are text editors, which are small programs that are designed specifically to create, read and edit plain text.
A pure text editor deals only with plain text and, in contrast to a word processor, is not designed to format text. At least one free text editor is included as a basic part of virtually every operating system. Among the most popular on Unix-like operating systems are vi, gedit and kedit. Emacs, which is often preferred by programmers, has a text editor function, but it also has a number of advanced capabilities that allow it to even be used for compiling programs and browsing the Web.
Examples of free text editors for other operating systems are SimpleText, which was included with the Macintosh prior to OS X, and Notepad, which is included with the Microsoft Windows operating systems. Caution should be exercised regarding the use of text editors for other operating systems because some of them, such as Notepad, do not treat Unix-style text files correctly and thus can cause programs for Unix-like operating systems to malfunction.
Word processors can also be used to read, write and edit plain text. But they can additionally be used with a variety of other text formats, including various proprietary formats (both those that are native to that particular word processor and those that are compatible with other word processors) and rich text. Several word processors can also create PDF (portable document format) documents.
There is a trend towards using plain text as an output format for all types of programs, not just text-oriented programs. This trend is even affecting some art programs, particularly free art programs which store their output as scalable vector graphics (SVG). SVG is an XML language (and thus plain text) for describing two-dimensional vector graphics (both static and animated) that makes it much easier to modify such graphics, including selectively transforming and regrouping parts of them, than do most conventional graphics formats. This is another example of the great flexibility of plain text.
Binary files are often converted into a plain text representation in order to improve their survivability during transit over the Internet or other networks. This is accomplished using encoding schemes such as Base64, which automatically converts all non-text e-mail data (e.g., images and attachments) into a 65-character subset of ASCII; the data is then converted back into its original form after arrival.
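A round trip through such an encoding is easy to demonstrate. The sketch below uses Python's standard base64 module on a few arbitrary non-text bytes:

```python
import base64

binary_data = bytes([0, 255, 128, 10, 13, 200])    # arbitrary non-text bytes
encoded = base64.b64encode(binary_data)            # plain ASCII text, safe for e-mail
decoded = base64.b64decode(encoded)                # original bytes restored on arrival

print(encoded.decode("ascii"))    # AP+ACg3I
print(decoded == binary_data)     # True
```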
Created February 15, 2005. Last updated February 9, 2007. | http://www.linfo.org/plain_text.html | 13 |
77 | Download lesson as Word document, click: Lesson 3
Mass and Moles
Objective: To explore different types of mass of elements, and be introduced to the measurement of moles.
Relative Isotopic Mass
This is the mass of an atom of an Isotope compared with 1/12 of the mass of an atom of Carbon-12.
Example: Oxygen-16 has a relative Isotopic mass of 16.0, Sodium-23 has a relative Isotopic mass of 23.0.
Relative Atomic Mass (Ar)
The relative atomic mass is the weighted mean mass of an atom of an element, this is calculated using the different masses and relative abundances of all the Isotopes of a particular element.
If I am told that 75% of Chlorine atoms have an atomic mass of 35, and 25% have an atomic mass of 37, I can calculate the relative atomic mass.
[(35×75) + (37×25)] ÷ 100 = 35.5
35.5 is therefore the relative atomic mass of Chlorine, an average taken using the types and amounts of other Isotopes of the element.
Magnesium (atomic number 12) has three isotopes:
Mg-24 (78.6% abundance), Mg-25 (10.1% abundance), Mg-26 (11.3% abundance)
[(24 x 78.6) + (25 x 10.1) + (26 x 11.3)] ÷ 100 = 24.3
As you can see Mg 24 is the most common / abundant Isotope, so the final average is nearest to 24. Relative atomic mass allows for the most accurate average mass of a particular element.
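The weighted-mean calculation works the same way for any element whose isotope masses and abundances are known. A small illustrative sketch (not part of the original lesson):

```python
def relative_atomic_mass(isotopes):
    """isotopes: list of (mass number, percent abundance) pairs."""
    return sum(mass * percent for mass, percent in isotopes) / 100.0

print(relative_atomic_mass([(35, 75), (37, 25)]))                   # 35.5 (chlorine)
print(relative_atomic_mass([(24, 78.6), (25, 10.1), (26, 11.3)]))   # ~24.3 (magnesium)
```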
What is the difference between Relative Isotopic Mass and Relative Atomic Mass?
This is simple; Relative Atomic Mass takes into account ALL the Isotopes of a particular element (as demonstrated above), whereas Relative Isotopic Mass means the mass of just ONE Isotope of a particular element.
Example: Chlorine-35 has relative ISOTOPIC mass of 35. Chlorine-37 has a relative isotopic mass of 37. But if we want the relative ATOMIC mass, we must add the isotopes, multiply by abundance, and divide by one hundred to find an average mass (this exact question was covered above).
Relative Formula Mass
Also known as relative molecular mass (Mr). Relative Formula Mass is the weighted mean mass of a molecule (compared with 1/12 of the mass of an atom of Carbon-12).
So, where Relative Atomic Mass dealt with an atom of a whole element, Relative Formula Mass deals with the mass of a molecule (a molecule is made up of two or more chemically bonded atoms).
Many elements and compounds are made up of simple molecules like N2, O2 or CO2
Compounds with giant structures do not exist as molecules, for example: Ionic compound NaCl, or covalent compound SiO2, so ‘relative formula mass’ is seen as more accurate than saying molecular mass.
Remember: Atoms, Molecules, and Compounds;
“A molecule is formed when two or more atoms join together chemically. A compound is a molecule that contains at least two different elements. All compounds are molecules but not all molecules are compounds.
Molecular hydrogen (H2), molecular oxygen (O2) and molecular nitrogen (N2) are not compounds because each is composed of a single element. Water (H2O), carbon dioxide (CO2) and methane (CH4) are compounds because each is made from more than one element. The smallest bit of each of these substances would be referred to as a molecule. For example, a single molecule of molecular hydrogen is made from two atoms of hydrogen while a single molecule of water is made from two atoms of hydrogen and one atom of oxygen.” http://education.jlab.org/qa/compound.html
To calculate the relative formula mass (Mr) of a substance all you have to do is add up the relative atomic masses of all the elements.
H2O Water there fore has a relative formula mass of 18.
Hydrogen has an atomic mass of 1, there are 2 atoms of Hydrogen in water so multiply by 2.
Oxygen has an atomic mass of 16, so 16 + 2 = 18.
What is the relative formula mass of K2CO3 ?
K2 – 1 atom of Potassium has atomic mass of 39.1, but there are 2 atoms here so x2 is 78.2
C – see on the periodic table, atomic mass 12
O – We already established has atomic mass 16, but there are 3 atoms of it here so x3 is 48
78.2 + 12 + 48 = 138.2
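The same bookkeeping can be expressed as a short function. The sketch below is illustrative only and uses the rounded atomic masses from the examples above:

```python
ATOMIC_MASS = {"H": 1.0, "O": 16.0, "C": 12.0, "K": 39.1}   # g/mol, rounded values

def formula_mass(composition):
    """composition: dict mapping element symbol -> number of atoms."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

print(round(formula_mass({"H": 2, "O": 1}), 2))           # 18.0 for H2O
print(round(formula_mass({"K": 2, "C": 1, "O": 3}), 2))   # 138.2 for K2CO3
```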
Introduction to the mole
The mole is a measure of the amount of a substance.
1 mole of an element has a mass in grams equal to that particular element's relative atomic mass (and 1 mole of a compound has a mass equal to its relative formula mass).
Example: 1 mole of oxygen has a molar mass/relative formula mass of 16gmol-1 (the unit gmol-1 literally means grams per mole, so there is 16g per mole of oxygen, 1 mole of oxygen has a mass of 16g)
As we know 16 is oxygen’s relative formula mass, and its relative atomic mass listed on the periodic table.
If we have 2 moles of oxygen atoms, the molar mass is still 16 g mol-1, but the total mass would be 2 × 16 g = 32 g.
But what if we are given the weight of an element in grams and told to calculate the amount of moles?
If I have 50g of Oxygen, all I need to do is divide that by Oxygen’s molar mass/relative formula mass (16gmol-1) and I get 3.125 moles. (Test this by multiplying 3.125mol by 16gmol-1).
This is all illustrated in the calculation formula triangle below:
1 mole of a substance contains 6.02 × 10^23 particles (atoms or molecules); this number is known as Avogadro's number and you can't escape it in chemistry.
A book I highly recommend is Calculations in AS / A Level Chemistry by Jim Clark, goes into detail with moles but keeps it simple and fun, with plenty of practice questions. The book is not just about moles, it’s everything, and has been essential. You can get a second hand version pretty cheap here: http://www.amazon.co.uk/Calculations-AS-A-Level-Chemistry/dp/0582411270/ref=sr_1_cc_1?s=aps&ie=UTF8&qid=1335368574&sr=1-1-catcorr
The bottom row multiplies, the top row divides. So…
moles x molar mass = mass
molar mass x moles = mass (same thing)
mass ÷ moles = molar mass
mass ÷ molar mass = moles
It’s very straightforward, so the above may seem patronising but it’s just in case anyone finds triangles tricky. The other formula triangles are below. | http://ruthlearns.wordpress.com/ | 13 |
122 | From Math Images
A Taylor series, or Taylor polynomial, is a function's polynomial expansion that approximates the value of this function around a certain point. For example, the animation at right shows the function y = sin(x) and its expanded Taylor series around the origin:
- sin(x) ≈ x - x^3/3! + x^5/5! - ··· ± x^n/n!, with n varying from 0 to 36. As we can see, the larger n is, the more terms we will have in the Taylor polynomial, and the more it looks like the original function. If n goes to infinity, then our approximating polynomial will be identical to the original function y = sin(x).
- For the math behind this, please go to the More Mathematical Explanation section.
Have you ever wondered how calculators work? How do they calculate square roots, sines, cosines, and exponentials? For example, if we type the sine of an angle into our calculator, then it will magically spit out a number. We know this number must be related to our input in some way, but what exactly is this relationship? Is the calculator just reading off of a list created from people who used rulers to physically measure the distance on a graph, or is there a more mathematical relationship?
The answer to the last question above is yes. There are algorithms that give an approximate value of sine, using only the four basic operations (+, -, x, /). Mathematicians studied these algorithms in order to calculate these functions manually before the age of electronic calculators. One such algorithm is given by the Taylor series, named after English mathematician Brook Taylor. Basically, Taylor said that there is a way to expand any infinitely differentiable function into a polynomial series around a certain point. This process uses a fair amount of single variable calculus, which will be explained in the More Mathematical Explanation section. Here we will only give some examples of Taylor series without explanation:
- sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ···, expanded around the origin. x is in radians.
- cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ···, expanded around the origin. x is in radians.
- e^x = 1 + x + x^2/2! + x^3/3! + ···, expanded around the origin. e is Euler's number, approximately equal to 2.71828···
- log(x) = (x - 1) - (x - 1)^2/2 + (x - 1)^3/3 - ···, expanded around the point x = 1.
Suppose, for example, that we want to estimate cos(30°) with the series above. First we need to convert degrees to radians in order to use the Taylor series: 30° = π/6 ≈ 0.5236 radians.
Then, substitute into the Taylor series of cosine above: cos(π/6) ≈ 1 - (π/6)^2/2! + (π/6)^4/4!
Here we only used 3 terms, since this should be enough to tell us something. Notice that the right side of the equation above involves only the four simple operations, so we can easily calculate its value: cos(π/6) ≈ 1 - 0.13708 + 0.00313 = 0.86605
On the other hand, trigonometry tells us the exact numerical value of this particular cosine: cos(30°) = √3/2 ≈ 0.86603
So our approximating value agrees with the actual value to the fourth digit, which is good accuracy for a 3-term-long approximation. Of course, better accuracy can be achieved by using more terms in the Taylor series.
We can get the same conclusion if we graph the original cosine function and its approximation together as shown in Figure 1-b. We can see that the original function and the approximating Taylor series are almost identical when x is small. In particular, the line x = π/6 cuts the two graphs almost simultaneously, so there is not much difference between the exact value and the approximating value. However, this doesn't mean that these two functions are exactly the same. For example, when x grows larger, they start to deviate significantly from each other. What's more, if we zoom in on the graph at the intersection point, as shown in Figure 1-c, we can see that there is indeed a tiny difference between these two functions, which we cannot see in a graph of normal scale.
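The three-term estimate of cos 30° can be reproduced in a few lines, using only the four basic operations plus the library cosine for comparison. This sketch is an illustration, not the algorithm an actual calculator uses:

```python
import math

x = math.pi / 6          # 30 degrees expressed in radians
approx = 1 - x**2 / math.factorial(2) + x**4 / math.factorial(4)   # first 3 terms of cos
exact = math.cos(x)

print(approx)                 # about 0.86605
print(exact)                  # about 0.86603
print(abs(approx - exact))    # the two agree to roughly four decimal places
```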
The calculator's algorithm is an improved version of this method. It may be more efficient, more accurate, and more general, but it still evaluates the numerical value of a polynomial series. This algorithm is built in the permanent memory (ROM) of electronic calculators, and is triggered every time we enter the function.
A More Mathematical Explanation
- Note: understanding of this explanation requires: *Calculus
How to derive Taylor Series from a given function
In this subsection, we are going to derive an explicit and general expression of a function's Taylor series, using only the derivatives of the given function f(x).
Mathematically, Taylor polynomials and Taylor series can be defined in the following way:
- The Taylor polynomial of degree n for f at a, written as T_n(x), is the polynomial that has the same 0th to nth order derivatives as the function f(x) at the point a. In other words, the nth degree Taylor polynomial must satisfy:
- T_n^(k)(a) = f^(k)(a) for k = 0, 1, 2, ..., n (the 0th order derivative of a function is just itself)
- in which f^(k)(a) is the kth order derivative of f at a.
- The Taylor series is just T_n(x) with infinitely large degree n. Notice that f must be infinitely differentiable in order to have a Taylor series.
The following set of images show some examples of Taylor polynomials, from 0th order to 2nd order:
From the definition above, the function f and its 0th order Taylor polynomial must have the same 0th order derivatives at a. Since the 0th order derivative of a function is just itself by definition, we have: T_0(x) = f(a),
which gives us the horizontal line shown in Figure 2-a. This is certainly not a very close approximation. So we need to add more terms.
The first order Taylor polynomial must satisfy T_1(a) = f(a) and T_1'(a) = f'(a), so: T_1(x) = f(a) + f'(a)(x - a),
which gives us the linear approximation shown in Figure 2-b. This approximation is much better than the 0th order one.
Similarly, the second degree Taylor polynomial must satisfy T_2(a) = f(a), T_2'(a) = f'(a), and T_2''(a) = f''(a), so: T_2(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2,
which gives us the quadratic approximation shown in Figure 2-c. This is the best approximation so far.
As we can see, the quality of our approximation increases as we add more terms to the Taylor polynomial. Since Taylor series is the Taylor polynomial of infinitely large degree, it should be a perfect approximation - identical to the original function.
Taylor proved that such a series must exist for every infinitely differentiable function f. In fact, without loss of generality, we can write the Taylor series of a function f around a as
Eq. 1        T(x) = a_0 + a_1(x - a) + a_2(x - a)^2 + a_3(x - a)^3 + ···
in which a_0, a_1, a_2, ... are unknown coefficients. What's more, from the definition of Taylor polynomials, we know that the function f and its Taylor series must have the same derivatives of all degrees:
- T(a) = f(a), T'(a) = f'(a), T''(a) = f''(a), ..., T^(n)(a) = f^(n)(a), ...
Taking the nth derivative of Eq. 1 and setting x = a gives T^(n)(a) = n! · a_n, so f^(n)(a) = n! · a_n. The terms before a_n vanished because their associated power of (x - a) didn't survive taking derivatives n times. The terms after a_n vanished because there are still (x - a) terms left, which make them equal to 0 when x = a. So we are left with this simple equation, from which we can directly get:
a_n = f^(n)(a) / n!
If we agree to define 0! = 1, then this formula holds for all non-negative integers n from 0 to infinity. So we have determined the value of all unknown coefficients using derivatives of the given function f. Substitute them back into Eq. 1 to get an explicit expression of the Taylor series:
Eq. 2        f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + (f'''(a)/3!)(x - a)^3 + ···
or in summation notation, f(x) = Σ (from n = 0 to ∞) [f^(n)(a)/n!] (x - a)^n
This is the standard formula of Taylor series that we are going to use in the rest of this article. In most cases we would like to let a = 0 to get a neater expression:
Eq. 3        f(x) = Σ (from n = 0 to ∞) [f^(n)(0)/n!] x^n = f(0) + f'(0) x + (f''(0)/2!) x^2 + ···
Eq. 3 is also called Maclaurin series, named after Scottish mathematician Colin Maclaurin, who made extensive use of these series in the 18th century.
We have given some examples of Taylor series in the Basic Description section. They are easy to derive using Eq. 2 - just substitute f and a into it, then compute the derivatives. Here we are going to do this in detail for only one function: the natural log. Other elementary functions, such as sin(x), cos(x), and e x, can be treated in a similar manner.
Our natural log function is: f(x) = log(x)
Its derivatives are:
- f'(x) = 1/x, f''(x) = -1/x^2, f'''(x) = 2/x^3, ...
Since this function and its derivatives are not defined at x = 0, we cannot use the Maclaurin series for it. Instead we can let a = 1 and compute the derivatives at this point:
- f(1) = 0, f'(1) = 1, f''(1) = -1, f'''(1) = 2, ...
Substitute these derivatives into Eq. 2, and we can get the Taylor series for log(x) centered at x = 1: log(x) = (x - 1) - (x - 1)^2/2 + (x - 1)^3/3 - (x - 1)^4/4 + ···
What's more, we can avoid the cumbersome (x - 1)^k notation by introducing a new function g(x) = log(1 + x). Now we can expand it around x = 0: g(x) = log(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ···
The animation to the right shows this Taylor polynomial with degree n varying from 0 to 25. As we can see, the left part of this polynomial soon approximates the original function as we have expected. However, the right part demonstrates some strange behavior: it seems to diverge farther away from the function as n grows larger. This tells us that Taylor series is not always a reliable approximation of the original function. Just the fact that they have same derivatives doesn't guarantee they are the same thing. There are more requirements needed.
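The divergence for |x| > 1 is easy to see numerically: evaluate partial sums of the series for log(1 + x) at a point inside the interval of convergence and at a point outside it. A small sketch, assuming nothing beyond the series written above:

```python
import math

def log1p_taylor(x, n_terms):
    """Partial sum of x - x^2/2 + x^3/3 - ... up to n_terms terms."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

for n in (5, 15, 25):
    inside = log1p_taylor(0.5, n)    # |x| < 1: settles down toward log(1.5)
    outside = log1p_taylor(2.0, n)   # |x| > 1: partial sums blow up
    print(n, round(inside, 6), round(outside, 2))

print(math.log(1.5))   # the value the convergent case is approaching
```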
This leads us to the discussion of convergent and divergent sequences in the next subsection.
To converge or not to converge, this is the question
From the last example of natural log, we can see that sometimes Taylor series fail to approximate their original functions. This happens because the Taylor series for natural log is divergent when , while a valid polynomial approximation needs to be convergent. Here are the definitions of convergence and divergence:
- Let our infinite sequence be: a_1, a_2, a_3, ..., a_n, ...
- and define its sum series to be: S_n = a_1 + a_2 + a_3 + ··· + a_n
- The sequence is said to be convergent if the following limit exists: lim (n → ∞) S_n
- If this limit doesn't exist, then the series is said to be divergent.
As we can see in the definition, whether a sequence is convergent or not depends on its sum series. If the sequence is "summable" when n goes to infinity, then it's convergent. If it's not, then it's divergent. Following are some examples of convergent and divergent sequences:
- Seq. 1: 1 + 1/2 + 1/4 + 1/8 + ··· = 2, convergent.
- Seq. 2: 1 - 1/3 + 1/5 - 1/7 + ··· = π/4, convergent.
- Seq. 3: 1 - 2 + 3 - 4 + 5 - ···, divergent. Vibrates above and below 0 with increasing magnitudes.
- Seq. 4: 1 + 1/2 + 1/3 + 1/4 + ···, divergent. Adds up to infinity.
Seq. 1 comes directly from the summation formula of geometric sequences. Seq. 2 is a famous summable sequence discovered by Leibniz. We are going to briefly explain these sequences in the following sections.
Seq. 3 and Seq. 4 are divergent because both of them add up to infinity. However, there is one important difference between them. On one hand, Seq. 3 has terms going to infinity, so it's not surprising that this one is not summable. On the other hand, Seq. 4 has terms going to zero, but they still have an infinitely large sum! This counter-intuitive result was first proved by Johann Bernoulli and Jacob Bernoulli in 17th century. In fact, this sequence is so epic in the history of math that mathematicians gave it a special name: the harmonic series. Click here for a proof of the divergence of harmonic series.
By definition, divergent series are not summable. So if we talk about the "sum" of these series, we may get ridiculous results. For example, look at the summation formula of geometric series: 1/(1 - r) = 1 + r + r^2 + r^3 + ···
This formula could be easily derived with a little manipulation of algebra, or by expanding the Maclaurin series of the left side. Click here for a simple proof. However, what we want to show here is that this formula doesn't work for all values of r. For values less than 1, such as 1/2, we can get reasonable results like: 1 + 1/2 + 1/4 + 1/8 + ··· = 1/(1 - 1/2) = 2
However, if the value of r is larger than 1, such as 2, things start to get weird: 1 + 2 + 4 + 8 + ··· = 1/(1 - 2) = -1
How can we get a negative number by adding a bunch of positive integers? Well, if this case makes mathematicians uncomfortable, then they are going to be even more puzzled by the following one, in which r = -2: 1 - 2 + 4 - 8 + 16 - ··· = 1/(1 - (-2)) = 1/3
This is ridiculous: the sum of integers can not possibly be a fraction. In fact, we are getting all these funny results because the last two series are divergent, so their sums are not defined. See the following images for a graphic representation of these series:
In the images above, the blue lines trace the geometric sequences, and the red lines trace their sum series. As we can see, the first sequence with r = 1/2 does have a limited sum, since its sum series converge to a finite value as n increases. However, the sum series of the other two sequences don't converge to anything. They never settle around a finite value. Thus the second and third sequences diverge, and their sums don't exist. Although we can still write down the summation formula in principle, this formula is meaningless. So no wonder we have got those weird results.
The same thing happens in the Taylor series of natural log: log(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ···
Let's look at an arbitrary term in this series: ±x^n / n. As n increases, the denominator is experiencing a linear growth, and the numerator is experiencing an exponential growth. It is a known fact that exponential growth will eventually override linear growth, as long as the absolute value of x is larger than one. So if x > 1, then the terms x^n / n will go to infinity, and this Taylor series will be divergent. This is why we saw the abnormal behavior of the right side of Figure 2-d. In this "divergent zone", although we can still write down the polynomial, it's no longer a valid approximation of the function. For example, if we want to calculate the value of log 4, instead of writing: log 4 = log(1 + 3) = 3 - 3^2/2 + 3^3/3 - 3^4/4 + ··· (which diverges),
we have to write: log 4 = log(2 · 2) = log 2 + log 2 = 2 (1 - 1/2 + 1/3 - 1/4 + ···),
in which we saved it from the "divergent zone" to the "convergent zone" by using the identity log(a·b) = log(a) + log(b).
Why It's Interesting
As we have stated before, Taylor series can be used to derive many interesting sequences, which helped mathematicians to determine the values of important math constants such as π and e.
π, or the ratio of a circle's circumference to its diameter, is one of the oldest, most important, and most interesting mathematical constants. The earliest documentation of π can be traced back to ancient Egypt and Babylon, in which people used empirical values of π such as 25/8 = 3.1250, or (16/9)^2 ≈ 3.1605.
The first recorded algorithm for rigorously calculating the value of π was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes. Archimedes computed upper and lower bounds of π by drawing regular polygons inside and outside a circle, and calculating the perimeters of the outer and inner polygons. He proved that 223/71 < π < 22/7 by using a 96-sided polygon, which gives us 2 accurate decimal digits: π ≈ 3.14.
Mathematicians continued to use this polygon method for the next 1,800 years. The more sides their polygons have, the more accurate their approximations would be. This approach peaked at around 1600, when the Dutch mathematician Ludolph van Ceulen used a 2^62-sided polygon to obtain the first 35 digits of π. He spent a major part of his life on this calculation. In memory of his contribution, π is sometimes still called "the Ludolphine number".
However, mathematicians have had enough of trillion-sided polygons. Starting from the 17th century, they devised much better approaches for computing π, using calculus rather than geometry. Mathematicians discovered numerous infinite series associated with π, and the most famous one among them is the Leibniz series: π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ···
We have seen the Leibniz series as an example of convergent series in the More Mathematical Explanation section. Here we are going to briefly explain how Leibniz got this result. This amazing sequence comes directly from the Taylor series of arctan(x):
Eq. 4a        arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ···
We can get Eq. 4a by directly computing the derivatives of all orders for arctan(x) at x = 0, but the calculation involved is rather complicated. There is a much easier way to do this if we notice the following fact:
Eq. 4b        d/dx arctan(x) = 1/(1 + x^2)
Recall that we gave the summation formula of geometric series in the More Mathematical Explanation section: 1/(1 - r) = 1 + r + r^2 + r^3 + ···
If we substitute r = -x^2 into the summation formula above, we can expand the right side of Eq. 4b into an infinite sequence: 1/(1 + x^2) = 1 - x^2 + x^4 - x^6 + ···
So Eq. 4b changes into: d/dx arctan(x) = 1 - x^2 + x^4 - x^6 + ···
Integrating both sides gives us: arctan(x) = C + x - x^3/3 + x^5/5 - x^7/7 + ···
Let x = 0, this equation changes into 0 = C . So the integrating constant C vanishes, and we get Eq. 4a.
One may notice that, like the Taylor series of many other functions, this series is not convergent for all values of x. It only converges for -1 ≤ x ≤ 1. Fortunately, this is just enough for us to proceed. Substituting x = 1 into it, we can get the Leibniz series: π/4 = arctan(1) = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ···
The Leibniz series gives us a radically improved way to approximate π: no polygons, no square roots, just the four basic operations. However, this particular series is not suitable for computing π, since it converges too slowly. The first 1,000 terms of the Leibniz series give us only two accurate digits: π ≈ 3.14. This is horribly inefficient, and no mathematician would ever want to use this algorithm.
Fortunately, we can get series that converge much faster if we substitute smaller values of x, such as x = 1/√3, into Eq. 4a: arctan(1/√3) = 1/√3 - (1/√3)^3/3 + (1/√3)^5/5 - ···
which gives us: π/6 = (1/√3) (1 - 1/(3·3) + 1/(5·3^2) - 1/(7·3^3) + ···)
This series is much more efficient than the Leibniz series, since there are powers of 3 in the denominators. The first 10 terms of it give us 5 accurate digits, and the first 100 terms give us 50. Leibniz himself used the first 22 terms to compute an approximation of pi correct to 11 decimal places as 3.14159265358.
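Summing a modest number of terms of each series shows the difference in convergence speed. The sketch below is an illustration only, not anyone's historical procedure:

```python
import math

def leibniz_pi(n_terms):
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def arctan_sqrt3_pi(n_terms):
    # pi/6 = (1/sqrt(3)) * (1 - 1/(3*3) + 1/(5*3^2) - 1/(7*3^3) + ...)
    s = sum((-1) ** k / ((2 * k + 1) * 3 ** k) for k in range(n_terms))
    return 6 * s / math.sqrt(3)

print(leibniz_pi(1000))       # ~3.1406, still wrong in the third decimal place
print(arctan_sqrt3_pi(10))    # ~3.14159, already close to pi with just 10 terms
print(math.pi)
```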
However, mathematicians are still not satisfied with this efficiency. They kept substituting smaller x values into Eq. 4a to get more convergent series. Among them is Leonhard Euler, one of the greatest mathematicians in the 18th century. In his attempt to approximate π, Euler discovered the following non-intuitive formula:
Although Eq. 4c looks really weird, it is indeed an equality, not an approximation. The following hidden section shows how it is derived in detail:
The next step is to expand Eq. 4c using Taylor series, which allows us to do the numeric calculations:
This series converges so fast that each term of it gives more than 1 digit of π. Using this algorithm, it would not take more than several days to calculate the first 35 digits of π with pencil and paper, a task on which Ludolph spent most of his life.
Although Euler himself never undertook the calculation, this idea was developed and used by many other mathematicians at his time. In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places for π, of which the first 126 were correct. This record was broken in 1841, when William Rutherford calculated 208 decimal places with 152 correct ones. By the time of the invention of electronic digital computers, π had been expanded to more than 500 digits. And we shouldn't forget that all of this started from the Taylor series of trigonometric functions.
Acknowledgement: Most of the historical information in this section comes from this article: click here.
The mathematical constant e, approximately equal to 2.71828, is also called Euler's Number. This important constant appears in calculus, differential equations, complex numbers, and many other branches of mathematics. What's more, it's also widely used in other subjects such as physics and engineering. So we would really like to know its exact value.
Mathematically, e is defined as: e = lim (n → ∞) (1 + 1/n)^n
In principle, we could have approximated e using this definition. However, this method is so slow and inefficient that we are forced to find another one. For example, set n to 100 in the definition, and we can get: (1 + 1/100)^100 ≈ 2.70481,
which gives us only 2 accurate digits. This is really, really horrible accuracy for an approximating algorithm. So we have to find another way to do this.
One possible way is to use the Taylor series of the function e^x, which has a very nice property: d/dx e^x = e^x
The proof of this property can be found in almost every calculus textbook. It tells us that all derivatives of the exponential function are equal: f(x) = f'(x) = f''(x) = ··· = e^x, so at x = 0 every derivative equals 1.
Substitute these derivatives into Eq. 2, the general formula of Taylor Series, and we can get: e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ···
Let x = 1, and we can get another way to approximate e: e = 1 + 1 + 1/2! + 1/3! + 1/4! + ···
This sequence is strongly convergent, since there are factorials in the denominators, and factorials grow really fast as n increases. Just take the first 10 terms and we can get: e ≈ 1 + 1 + 1/2! + 1/3! + ··· + 1/9! ≈ 2.7182815
The real value of e is 2.718281828···, so we have got 7 accurate digits! Compared to the approximation by definition, which gives us only two digits at order 100, this algorithm is incredibly fast and efficient.
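Both approximations are easy to compare directly. The sketch below sums the first ten terms of the factorial series and evaluates the limit definition at n = 100:

```python
import math

series_e = sum(1 / math.factorial(k) for k in range(10))   # first 10 terms of the series
definition_e = (1 + 1 / 100) ** 100                        # the limit definition at n = 100

print(series_e)        # 2.7182815..., about 7 accurate digits
print(definition_e)    # 2.7048..., only 2 accurate digits
print(math.e)          # 2.718281828...
```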
In fact, we reach the same conclusion if we plot the function e^x together with its two approximations and see which one converges faster. We already have the Taylor series approximation:
We can also approximate powers of e using the definition:
in which n' = n·x. We can switch between n' and n because both of them go to infinity, so which one we use doesn't matter.
In Figure 5-b, these two approximations are graphed together to approximate the original function ex. As we can see in the animation, Taylor series approximates the original function much faster than the definition does.
- How Does the Calculator Find Values of Sine?, from HomeschoolMath. An article about calculator programs for approximating functions.
- Calculator, from Wikipedia. This article explains the structure of an electronic calculator.
- The Harmonic Series Diverges Again and Again, by Steven J. Kifowit and Terra A. Stamps. This article explains why the harmonic series is divergent.
- Harmonic Series, from Wolfram MathWorld. A simple proof that the harmonic series diverges.
- Pi, from Wolfram MathWorld. This article contains some history of π.
- Archimedes' Approximation of Pi. A thorough explanation of Archimedes' method.
- Digits of Pi, by Barry Cipra. Includes documentation of Ludolph's work.
- How Euler Did It, by Ed Sandifer. This article discusses Euler's algorithm for estimating π.
| http://mathforum.org/mathimages/index.php?title=Taylor_Series&oldid=34168 | 13
56 | NFPA 921 Sections 3-1 through 3-3.3
Chemistry of Combustion
[interFIRE VR Note: Tables and Figures have not been reproduced.]
3-1. Chemistry of Combustion. The fire investigator should have
a basic understanding of combustion principles and be able to use them to
help in interpretation of evidence at the fire scene and in the development
of conclusions regarding the origin and cause of the fire.
The body of knowledge associated with combustion and fire would easily
fill several textbooks. The discussion presented in this section should
be considered as introductory. The user of this guide is urged to consult
the technical literature for additional details.
3-1.1. Fire Tetrahedron. The combustion reaction can be characterized
by four components: the fuel, the oxidizing agent, heat, and an uninhibited
chemical chain reaction. These four components have been classically symbolized
by a four-sided solid geometric form called a tetrahedron (see Figure 3-1.1).
Fires can be prevented or suppressed by controlling or removing one or more
of the sides of the tetrahedron.
A fuel is any substance that can undergo combustion. The majority of
fuels encountered are organic and contain carbon and combinations of hydrogen
and oxygen in varying ratios. In some cases, nitrogen will be present; examples
include wood, plastics, gasoline, alcohol, and natural gas. Inorganic fuels
contain no carbon and include combustible metals, such as magnesium or sodium.
All matter can exist in one of three phases: solid, liquid, or gas. The
phase of a given material depends on the temperature and pressure and can
change as conditions vary. If cold enough, carbon dioxide, for example,
can exist as a solid (dry ice). The normal phase of a material is that which
exists at standard conditions of temperature [21°C (70°F)] and pressure
[14.7 psi (101.6 kPa) or 1 atmosphere at sea level].
Combustion of a solid or liquid fuel takes place above the fuel surface
in a region of vapors created by heating the fuel surface. The heat can
come from the ambient conditions, from the presence of an ignition source,
or from exposure to an existing fire. The application of heat causes vapors
or pyrolysis products to be released into the atmosphere where they can
burn if in the proper mixture with air and if a competent ignition source
is present. Ignition is discussed in Section 3-3.
Some solid materials can undergo a charring reaction where oxygen reacts
directly with solid material. Charring can be the initial or the final stage
of burning. Sometimes charring combustion breaks into flame; on other occasions
charring continues through the total course of events.
Gaseous fuels do not require vaporization or pyrolysis before combustion
can occur. Only the proper mixture with air and an ignition source are needed.
The form of a solid or liquid fuel is an important factor in its ignition
and burning rate. For example, a fine wood dust ignites easier and burns
faster than a block of wood. Some flammable liquids, such as diesel oil,
are difficult to ignite in a pool but can ignite readily and burn rapidly
when in the form of a fine spray or mist.
For the purposes of the following discussion, the term fuel is used to
describe vapors and gases rather than solids.
3-1.1.2.* Oxidizing Agent. In most fire situations, the oxidizing
agent is the oxygen in the earth's atmosphere. Fire can occur in the absence
of atmospheric oxygen when fuels are mixed with chemical oxidizers. Many
chemical oxidizers contain readily released oxygen. Ammonium nitrate fertilizer
(NH4NO3), potassium nitrate (KNO3), and hydrogen peroxide (H2O2) are examples.
Normal air contains 21 percent oxygen. In oxygen-enriched atmospheres,
such as in areas where medical oxygen is in use or in high-pressure diving
or medical chambers, combustion is greatly accelerated. Materials that resist
ignition or burn slowly in air can burn vigorously when additional oxygen
is present. Combustion can be initiated in atmospheres containing very low
percentages of oxygen, depending on the fuel involved. As the temperature
of the environment increases, the oxygen requirements are further reduced.
While flaming combustion can occur at concentrations as low as 14 to 16
percent oxygen in air at room temperatures of 70°F (21°C), flaming
combustion can continue at close to 0 percent oxygen under post-flashover
temperature conditions. Also, smoldering combustion once initiated can continue
in a low-oxygen environment even when the surrounding environment is at
a relatively low temperature. The hotter the environment, the less oxygen
is required. This latter condition is why wood and other materials can continue
to be consumed even though the fire is in a closed compartment with low
oxygen content. Fuels that are enveloped in a layer of hot, oxygen-depleted
combustion products in the upper portion of a room can also be consumed.
It should be noted that certain gases can form flammable mixtures in
atmospheres other than air or oxygen. One example is a mixture of hydrogen
and chlorine gas.
For combustion to take place, the fuel vapor or gas and the oxidizer
should be mixed in the correct ratio. In the case of solids and liquids,
the pyrolysis products or vapors disperse from the fuel surface and mix
with the air. As the distance from the fuel source increases, the concentration
of the vapors and pyrolysis products decreases. The same process acts to
reduce the concentration of a gas as the distance from the source increases.
Fuel burns only when the fuel/air ratio is within certain limits known
as the flammable (explosive) limits. In cases where fuels can form flammable
mixtures with air, there is a minimum concentration of vapor in air below
which propagation of flame does not occur. This is called the lower flammable
limit. There is also a maximum concentration above which flame will not
propagate called the upper flammable limit. These limits are generally expressed
in terms of percentage by volume of vapor or gas in air.
The flammable limits reported are usually corrected to a temperature
of 32°F (0°C) and 1 atmosphere. Increases in temperature and pressure
result in reduced lower flammable limits possibly below 1 percent and increased
upper flammable limits. Upper limits for some fuels can approach 100 percent
at high temperatures. A decrease in temperature and pressure will have the
opposite effect. Caution should be exercised when using the values for flammability
limits found in the literature. The reported values are often based on a
single experimental apparatus that does not necessarily account for conditions
found in practice.
The range of mixtures between the lower and upper limits is called the
flammable (explosive) range. For example, the lower limit of flammability
of gasoline at ordinary temperatures and pressures is 1.4 percent, and the
upper limit is 7.6 percent. All concentrations by volume falling between
1.4 and 7.6 percent will be in the flammable (explosive) range. All other
factors being equal, the wider the flammable range, the greater the likelihood
of the mixture coming in contact with an ignition source and thus the greater
the hazard of the fuel. Acetylene, with a flammable range between 2.5 and
100 percent, and hydrogen, with a range from 4 to 75 percent, are considered
very dangerous and very likely to be ignited when released.
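NFPA 921 contains no code, but a minimal sketch (ours) shows how the flammable-range idea is applied in practice; the limits are the room-temperature values quoted above, and, as the guide cautions, real limits shift with temperature and pressure:

```python
# Flammable (explosive) limits, percent fuel vapor by volume in air,
# taken from the figures quoted in the text above.
FLAMMABLE_LIMITS = {
    "gasoline":  (1.4, 7.6),
    "acetylene": (2.5, 100.0),
    "hydrogen":  (4.0, 75.0),
}

def in_flammable_range(fuel, percent_by_volume):
    lower, upper = FLAMMABLE_LIMITS[fuel]
    return lower <= percent_by_volume <= upper

print(in_flammable_range("gasoline", 0.5))   # False - too lean (below the lower limit)
print(in_flammable_range("gasoline", 5.0))   # True  - within the flammable range
print(in_flammable_range("gasoline", 10.0))  # False - too rich (above the upper limit)
```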
Every fuel/air mixture has an optimum ratio at which point the combustion
will be most efficient. This occurs at or near the mixture known by chemists
as the stoichiometric ratio. When the amount of air is in balance with the
amount of fuel (i.e., after burning there is neither unused fuel nor unused
air), the burning is referred to as stoichiometric. This condition rarely
occurs in fires except in certain types of gas fires. (See Chapter 13.)
Fires usually have either an excess of air or an excess of fuel. When
there is an excess of air, the fire is considered to be fuel controlled.
When there is more fuel present than air, a condition that occurs frequently
in well-developed room or compartment fires, the fire is considered to be ventilation controlled.
In a fuel-controlled compartment fire, all the burning will take place
within the compartment and the products of combustion will be much the same
as burning the same material in the open. In a ventilation-controlled compartment
fire, the combustion inside the compartment will be incomplete. The burning
rate will be limited by the amount of air entering the compartment. This
condition will result in unburned fuel and other products of incomplete
combustion leaving the compartment and spreading to adjacent spaces. Ventilation-controlled
fires can produce massive amounts of carbon monoxide.
If the gases immediately vent out a window or into an area where sufficient
oxygen is present, they will ignite and burn when the gases are above their
ignition temperatures. If the venting is into an area where the fire has
caused the atmosphere to be deficient in oxygen, such as a thick layer of
smoke in an adjacent room, it is likely that flame extension in that direction
will cease, although the gases can be hot enough to cause charring and extensive heat damage.
3-1.1.3. Heat. The heat component of the tetrahedron represents
heat energy above the minimum level necessary to release fuel vapors and
cause ignition. Heat is commonly defined in terms of intensity or heating
rate (Btu/sec or kilowatts) or as the total heat energy received over time
(Btu or kilojoules). In a fire, heat produces fuel vapors, causes ignition,
and promotes fire growth and flame spread by maintaining a continuous cycle
of fuel production and ignition.
3-1.1.4. Uninhibited Chemical Chain Reaction. Combustion is a
complex set of chemical reactions that results in the rapid oxidation of
a fuel producing heat, light, and a variety of chemical by-products. Slow
oxidation, such as rust or the yellowing of newspaper, produces heat so
slowly that combustion does not occur. Self-sustained combustion occurs
when sufficient excess heat from the exothermic reaction radiates back to
the fuel to produce vapors and cause ignition in the absence of the original
ignition source. For a detailed discussion of ignition, see Section 3-3.
Combustion of solids can occur by two mechanisms: flaming and smoldering.
Flaming combustion takes place in the gas or vapor phase of a fuel. With
solid and liquid fuels, this is above the surface. Smoldering is a surface-burning
phenomenon with solid fuels and involves a lower rate of heat release and
no visible flame. Smoldering fires frequently make a transition to flaming
after sufficient total energy has been produced or when airflow is present
to speed up the combustion rate.
3-2. Heat Transfer. The transfer of heat is a major factor in
fires and has an effect on ignition, growth, spread, decay (reduction in
energy output), and extinction. Heat transfer is also responsible for much
of the physical evidence used by investigators in attempting to establish
a fire's origin and cause.
It is important to distinguish between heat and temperature. Temperature
is a measure that expresses the degree of molecular activity of a material
compared to a reference point such as the freezing point of water. Heat
is the energy that is needed to maintain or change the temperature of an
object. When heat energy is transferred to an object, the temperature increases.
When heat is transferred away, the temperature decreases.
In a fire situation, heat is always transferred from the high-temperature
mass to the low-temperature mass. Heat transfer is measured in terms of
energy flow per unit of time (Btu/sec or kilowatts). The greater the temperature
difference between the objects, the more energy is transferred per unit
of time and the higher the heat transfer rate is. Temperature can be compared
to the pressure in a fire hose and heat or energy transfer to the waterflow
in gallons per minute.
Heat transfer is accomplished by three mechanisms: conduction, convection,
and radiation. All three play a role in the investigation of a fire, and
an understanding of each is necessary.
3-2.1. Conduction. Conduction is the form of heat transfer that
takes place within solids when one portion of an object is heated. Energy
is transferred from the heated area to the unheated area at a rate dependent
on the difference in temperature and the physical properties of the material.
The properties are the thermal conductivity (k), the density (p),
and the heat capacity (c). The heat capacity (specific heat) of a
material is a measure of the amount of heat necessary to raise its temperature
(Btu/lb/degree of temperature rise).
If thermal conductivity (k) is high, the rate of heat transfer
through the material is high. Metals have high thermal conductivities (k),
while plastics or glass have low thermal conductivity (k) values.
Other properties (k and c) being equal, high-density (p)
materials conduct heat faster than low-density materials. This is why low-density
materials make good insulators. Similarly, materials with a high heat capacity
(c) require more energy to raise the temperature than materials with
low heat capacity values.
Generally, conduction heat transfer is considered between two points
with the energy source at a constant temperature. The other point will increase
to some steady temperature lower than that of the source. This condition
is known as steady state. Once steady state is reached, thermal conductivity
(k) is the dominant heat transfer property. In the growing stages
of a fire, temperatures are continuously changing, resulting in changing
rates of heat transfer. During this period, all three properties thermal
conductivity (k), density (p), and heat capacity (c)
play a role. Taken together, these properties are commonly called the thermal
inertia of a material and are expressed in terms of k, p,
c. Table 3-2.1 provides data for some common materials.
The impact of the thermal inertia on the rise in temperature in a space
or on the material in it is not constant through the duration of a fire.
Eventually, as the materials involved reach a constant temperature, the
effects of density (p) and heat capacity (c) become insignificant
relative to thermal conductivity. Therefore, thermal inertia of a material
is most important at the initiation and early stages of a fire (pre-flashover).
Conduction of heat into a material as it affects its surface temperature
is an important aspect of ignition. Thermal inertia is an important factor
in how fast the surface temperature will rise. The lower the thermal inertia
of the material, the faster the surface temperature will rise. Conduction
is also a mechanism of fire spread. Heat conducted through a metal wall
or along a pipe or metal beam can cause ignition of combustibles in contact
with the heated metals. Conduction through metal fasteners such as nails,
nail plates, or bolts can result in fire spread or structural failure.
3-2.2. Convection. Convection is the transfer of heat energy by
the movement of heated liquids or gases from the source of heat to a cooler
part of the environment.
Heat is transferred by convection to a solid when hot gases pass over
cooler surfaces. The rate of heat transfer to the solid is a function of
the temperature difference, the surface area exposed to the hot gas, and
the velocity of the hot gas. The higher the velocity of the gas, the greater
the rate of convective transfer.
In the early history of a fire, convection plays a major role in moving
the hot gases from the fire to the upper portions of the room of origin
and throughout the building. As the room temperatures rise with the approach
of flashover, convection continues, but the role of radiation increases
rapidly and becomes the dominant heat transfer mechanism. See 3-5.3.2 for
a discussion of the development of flashover. Even after flashover, convection
can be an important mechanism in the spread of smoke, hot gases, and unburned
fuels throughout a building. This can spread the fire or toxic or damaging
products of combustion to remote areas.
3-2.3. Radiation. Radiation is the transfer of heat energy from
a hot surface to a cooler surface by electromagnetic waves without an intervening
medium. For example, the heat energy from the sun is radiated to earth through
the vacuum of space. Radiant energy can be transferred only by line-of-sight
and will be reduced or blocked by intervening materials. Intervening materials
do not necessarily block all radiant heat. For example, radiant heat is
reduced on the order of 50 percent by some glazing materials.
The rate of radiant heat transfer is strongly related to a difference
in the fourth power of the absolute temperature of the radiator and the
target. At high temperatures, small increases in the temperature difference
result in a massive increase in the radiant energy transfer. Doubling the
absolute temperature of the hotter item without changing the temperature
of the colder item results in a 16-fold increase in radiation between the
two objects. (See Figure 3-2.3.)
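A back-of-the-envelope sketch (ours, not part of NFPA 921) illustrates the fourth-power dependence, assuming an ideal blackbody radiator and the Stefan-Boltzmann law:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_emission(temp_kelvin):
    # Power radiated per square metre of an ideal (blackbody) surface.
    return SIGMA * temp_kelvin ** 4

e1 = radiant_emission(600.0)    # a hot surface, in kelvins
e2 = radiant_emission(1200.0)   # the same surface at double the absolute temperature
print(e2 / e1)                  # 16.0 - the 16-fold increase described above
```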
The rate of heat transfer is also strongly affected by the distance between
the radiator and the target. As the distance increases, the amount of energy
falling on a unit of area falls off in a manner that is related to both
the size of the radiating source and the distance to the target.
3-3.* Ignition. In order for most materials to be ignited they
should be in a gaseous or vapor state. A few materials may burn directly
in a solid state or glowing form of combustion including some forms of carbon
and magnesium. These gases or vapors should then be present in the atmosphere
in sufficient quantity to form a flammable mixture. Liquids with flash points
below ambient temperatures do not require additional heat to produce a flammable
mixture. The fuel vapors produced should then be raised to their ignition
temperature. The time and energy required for ignition to occur is a function
of the energy of the ignition source, the thermal inertia (k, p,
c) of the fuel, and the minimum ignition energy required by that
fuel and the geometry of the fuel. If the fuel is to reach its ignition
temperature, the rate of heat transfer to the fuel should be greater than
the conduction of heat into or through the fuel and the losses due to radiation
and convection. Table 3-3 shows the temperature of selected ignition sources.
A few materials, such as cigarettes, upholstered furniture, sawdust, and
cellulosic insulation, are permeable and readily allow air infiltration.
These materials can burn as solid phase combustion, known as smoldering.
This is a flameless form of combustion whose principal heat source is char
oxidation. Smoldering is hazardous, as it produces more toxic compounds
than flaming combustion per unit mass burned, and it provides a chance for
flaming combustion from a heat source too weak to directly produce flame.
The term smoldering is sometimes inappropriately used to describe a nonflaming
response of a solid fuel to an external heat flux. Solid fuels, such as
wood, when subjected to a sufficient heat flux, will degrade, gasify, and
release vapors. There usually is little or no oxidation involved in this
gasification process, and thus it is endothermic. This is more appropriately
referred to as forced pyrolysis, and not smoldering.
3-3.1. Ignition of Solid Fuels. For solid fuels to burn with a
flame, the substance should either be melted and vaporized (like thermoplastics)
or be pyrolyzed into gases or vapors (i.e., wood or thermoset plastic).
In both examples, heat must be supplied to the fuel to generate the vapors.
High-density materials of the same generic type (woods, plastics) conduct
energy away from the area of the ignition source more rapidly than low-density
materials, which act as insulators and allow the energy to remain at the
surface. For example, given the same ignition source, oak takes longer to
ignite than a soft pine. Low-density foam plastic, on the other hand, ignites
more quickly than high-density plastic.
The amount of surface area for a given mass (surface area to mass ratio)
also affects the quantity of energy necessary for ignition. It is relatively
easy to ignite one pound of thin pine shavings with a match, while ignition
of a one-pound solid block of wood with the same match is very unlikely.
Because of the higher surface area to mass ratio, corners of combustible
materials are more easily burned than flat surfaces. Table 3-3.1 shows the
time for pilot ignition of wood exposed to varying temperatures.
Caution is needed in using Table 3-3.1, as the times and temperatures
given are for ignition with a pilot flame. These are good estimates for
ignition of wood by an existing fire. These temperatures are not to be used
to estimate the temperature necessary for the first item to ignite. The
absence of the pilot flame requires that the fuel vapors of the first item
ignited be heated to their autoignition temperature. In An Introduction
to Fire Dynamics, Dougal Drysdale reports two temperatures for wood
to autoignite or spontaneously ignite. These are heating by radiation, 600°C
(1112°F), and heating by conduction, 490°C (914°F).
For spontaneous ignition to occur as a result of radiative heat transfer,
the volatiles released from the surface should be hot enough to produce
a flammable mixture above its autoignition temperature when it mixes with
unheated air. With convective heating on the other hand, the air is already
at a high temperature and the volatiles need not be as hot.
Figure 3-3.1(a) illustrates the relationship between ignition energy
and time to ignition for thin and thick materials. When exposed to their
ignition temperature, thin materials ignite faster than thick materials
(e.g., paper vs. plywood). [See Figure 3-3.1(b).]
3-3.2. Ignition of Liquids. In order for the vapors of a liquid
to form an ignitible mixture, the liquid should be at or above its flash
point. The flash point of a liquid is the lowest temperature at which it
gives off sufficient vapor to support a momentary flame across its surface
based on an appropriate ASTM test method. The value of the flash point may
vary depending on the type of test used. Even though most of a liquid may
be slightly below its flash point, an ignition source can create a locally
heated area sufficient to result in ignition.
Atomized liquids or mists (those having a high surface area to mass ratio)
can be more easily ignited than the same liquid in the bulk form. In the
case of sprays, ignition can often occur at ambient temperatures below the
published flash point of the bulk liquid provided the liquid is heated above
its flash point and ignition temperature at the heat source.
3-3.3. Ignition of Gases. Combustible substances in the gaseous
state have extremely low mass and require the least amount of energy for ignition.
*A-3-1.1.2 For more information on flammability limits,
see USBM Flammability Characteristics of Combustible Gases and Vapors.
* A-3-3 For additional information, see Ohlemiller, Smoldering Combustion.
For more information, contact:
The NFPA Library at (617) 984-7445 or e-mail [email protected]
Taken from NFPA 921, Guide for Fire and Explosion Investigations,
1998 Edition, copyright © National Fire Protection Association,
1998. This material is not the complete and official position of the NFPA
on the referenced subject, which is represented only by the standard in its entirety.
Used by permission. | http://www.interfire.org/res_file/9213-1.asp | 13 |
53 | Density: Sink and Float for Solids
- The density of an object determines whether it will float or sink in another substance.
- An object will float if it is less dense than the liquid it is placed in.
- An object will sink if it is more dense than the liquid it is placed in.
Students will investigate a wax candle and a piece of clay to understand why the candle floats and the clay sinks even though the candle is heavier than the piece of clay. Students will discover that it is not the weight of the object, but its density compared to the density of water, that determines whether an object will sink or float in water.
Students will be able to determine whether an object will sink or float by comparing its density to the density of water.
Download the student activity sheet, and distribute one per student when specified in the activity. The activity sheet will serve as the “Evaluate” component of each 5-E lesson plan.
Make sure you and your students wear properly fitting goggles.
Materials for Each Group
- 2 tea light candles in their metal containers
- Water in cup
- Small balance
Notes About the Materials
A simple balance is required for the demonstration. One of the least expensive is Delta Education, Stackable Balance (21-inch) Product # 020-0452-595. Students can use the smaller version of the same balance, Delta Education, Primary Balance (12-inch), Product #WW020-0452. You will need tea light candles for the demonstration and for each student group. Look for candles in which the wax completely fills the metal container.
Do a demonstration to show that the wax is heavier than the clay but that the wax floats and the clay sinks.
Materials for the demonstration
- 1 tea light candle
- Clear plastic container
- Large balance
- Use a small enough piece of clay so that you are sure that the candle weighs more than the clay.
- Pour water into a clear plastic container (or large cup) until it is about ½-full.
- Place a piece of clay that weighs less than a tea light candle on one end of a balance.
- Remove the candle from its metal container and place the candle on the other end of the balance.
Ask students which is heavier, the clay or the candle. Ask them to predict which will sink and which will float. Then, place the clay and candle in a clear container of water.
Even though the candle weighs more than the clay, the candle floats and the clay sinks.
Have students compare the density of water, wax, and clay.
Question to investigate
Why does a heavier candle float and a lighter piece of clay sink?
Materials for each group
- 2 tea light candles in their metal containers
- Water in cup
- Small balance
Compare the density of wax and water
- Roll two pieces of tape and stick them to the center of the pan at each end of the balance.
- Attach each tea light candle to the tape so that each candle is in the center of the pan.
- Use the wick to pull one candle out of its container.
Carefully pour water into the empty metal container until it fills the container to the same level as the candle in the other container. You may use a dropper to add the last bit of water and prevent spilling. The goal is to compare the mass of equal volumes of wax and water.
The water has a greater mass than an equal volume of wax. So, the density of water must be greater than the density of wax.
- Which weighs more, wax or an equal volume of water?
- Water weighs more than an equal volume of wax.
- Which is more dense, wax or water?
- Water is more dense.
If students have trouble understanding this relationship between the mass and density of equal volumes, have them think about the demonstration from Chapter 3, Lesson 1 with the aluminum and copper cubes. Both had the same volume, but the copper cube weighed more. Because the copper had more mass, it also had a greater density.
Compare the density of clay and water
- Make sure you have one piece of tape in the center of each pan on the balance.
Fill one container with clay and place it on the tape so that it is in the center of the pan.
- Place an empty container on the tape at the opposite end of the balance.
- Slowly and carefully add water to the empty container until it is full.
The clay has a greater mass than an equal volume of water. So, the density of clay is greater than the density of water.
- Which weighs more, the clay or an equal volume of water?
- The clay weighs more than an equal volume of water.
- Which is more dense, clay or water?
- Clay is more dense.
- Knowing the density of an object can help you predict if it will sink or float in water. If an object is more dense than water, would you expect it to sink or float?
- Objects that are more dense than water sink.
- If an object is less dense than water, would you expect it to sink or float?
- Objects that are less dense than water float.
Compare the density of wax, water, and clay on the molecular level.
Wax is made of carbon and hydrogen atoms connected together in long chains. These long chains are tangled and intertwined and packed together to make the wax.
Even though they both have lots of hydrogen atoms, water is more dense than wax because the oxygen in water is heavier and smaller than the carbon in the wax. Also, the long chains of the wax do not pack as efficiently as the small water molecules.
Clay has oxygen atoms like water, but it also has heavier atoms like silicon and aluminum. The oxygen atoms are bonded to the silicon and aluminum to make molecules with a lot of mass. These are packed closely together, which makes the clay more dense than water.
Have students explain, in terms of density, why a very heavy object like a big log floats and why a very light object like a tiny grain of sand sinks.
- A giant log can float on a lake, while a tiny grain of sand sinks to the bottom. Explain why a heavy object like the log floats while a very light grain of sand sinks.
- Students should recognize that a log will float because wood is less dense than water. If you could weigh a large amount of water that has the same volume as the log, the log will weigh less than the water. Therefore, the log floats. A grain of sand will sink because sand is more dense than water. If you could weigh a small amount of water that has the same volume as the grain of sand, the sand will weigh more than the water. Therefore, the sand sinks.
Students should realize that if an object weighs more than an equal volume of water, it is more dense and will sink, and if it weighs less than an equal volume of water, it is less dense and will float.
Remember that the density of water is about 1 g/cm³. Predict whether the following objects will sink or float.

Table 1. Buoyancy of several materials.
Object                Density (g/cm³)   Sink or Float
Cork                  0.2–0.3           Float
Anchor                7.8               Sink
Spruce wood oar       0.4               Float
Apple                 0.9               Float
Orange                0.84              Float
Orange without peel   1.16              Sink
- If a peach has a volume of 130 cm³ and sinks in water, what can you say about its mass?
- Its mass must be more than 130 grams.
- If a banana has a mass of 150 grams and floats in water, what can you say about its volume?
- Its volume must be more than 150 cm³.
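A short sketch (not part of the lesson) applies the sink/float rule to the densities in Table 1; the cork entry uses the midpoint of its listed range:

```python
WATER_DENSITY = 1.0  # g/cm^3, the approximate value used in the lesson

def sink_or_float(density_g_per_cm3):
    # An object sinks if it is more dense than water, floats if it is less dense.
    return "Sink" if density_g_per_cm3 > WATER_DENSITY else "Float"

for name, density in [("Cork", 0.25), ("Anchor", 7.8), ("Spruce wood oar", 0.4),
                      ("Apple", 0.9), ("Orange", 0.84), ("Orange without peel", 1.16)]:
    print(name, sink_or_float(density))
```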
Note: Students may wonder why boats made out of dense material like steel can be made to float. This is a good question and there are several ways of answering it. A key to understanding this phenomenon is that the density of the material and the density of an object made of that material are not necessarily the same. If a solid ball or cube of steel is placed in water, it sinks. But if that same steel is pounded and flattened thin and formed into a big bowl-like shape, the overall volume of the bowl is much greater than the volume of the steel cube. The mass of the steel is the same but the big increase in volume makes the density of the bowl less than the density of water so the bowl floats. This is the same reason why a steel ship is able to float. The material is shaped in such a way so that the density of the ship is less than the density of water. | http://www.middleschoolchemistry.com/lessonplans/chapter3/lesson4 | 13 |
123 | In mathematics and computer science, hexadecimal (also base-16, hexa, or hex) is a numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine, and A, B, C, D, E, F (or a through f) to represent values ten to fifteen.
Its primary use is as a human friendly representation of binary coded values, so it is often used in digital electronics and computer engineering. Since each hexadecimal digit represents four binary digits (bits)—also called a nibble—it is a compact and easily translated shorthand to express values in base two.
In digital computing, hexadecimal is primarily used to represent bytes. Attempts to represent the 256 possible byte values by other means have led to problems. Directly representing each possible byte value with a single character representation runs into unprintable control characters in the ASCII character set. Even if a standard set of printable characters were devised for every byte value, neither users nor input hardware are equipped to handle 256 unique characters. Most hex editing software displays each byte as a single character, but unprintable characters are usually substituted with period or blank.
In URLs, all characters can be coded using hexadecimal. Each 2-digit (1 byte) hexadecimal sequence is preceded by a percent sign. For example, the URL http://en.wikipedia.org/wiki/Main%20Page substitutes a space (which is not allowed in URLs) with the hex code for a space (%20).
In situations where there is no context, a hexadecimal number might be ambiguous and confused with numbers expressed in other bases. There are several conventions for unambiguously expressing values. In mathematics, a subscript is often used on each number explicitly giving the base: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which is equal to 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex.
In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen:
In URLs, character codes are written as hexadecimal pairs prefixed with %: http://www.example.com/name%20with%20spaces where %20 is the space (blank) character, code 20 hex, or 32 decimal.
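A brief Python sketch (ours) shows the same percent-encoding in practice, using the standard-library urllib.parse functions:

```python
from urllib.parse import quote, unquote

print(quote("name with spaces"))    # name%20with%20spaces
print(unquote("Main%20Page"))       # Main Page
print(format(ord(" "), "02X"))      # 20 - the two-digit hex code of the space character
```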
In XML and XHTML, characters can be expressed as hexadecimal using the notation &#x…; (a hexadecimal numeric character reference). Color references are expressed in hex prefixed with #: #FFFFFF which gives white.
The C programming language (and its syntactical descendants) use the prefix 0x: 0x5A3. Character and string constants may express character codes in hexadecimal with the prefix \x followed by two hex digits: '\x1B' (specifies the Esc control character); "\x1B[0m\x1B[25;1H" is a string containing 11 characters (not including an implied trailing NUL). To output a value as hexadecimal with the printf function family, the format conversion code %X or %x is used.
In the Unicode standard, a character value is represented with U+ followed by the hex value: U+20AC is the Euro sign (€).
MIME (e-mail extensions) quoted-printable encoding expresses non-printable ASCII characters in a text/plain MIME-part body as an equals sign = followed by their hex code, as in Espa=D1a to send "España" (Spain).
In Intel-derived assembly languages, hexadecimal is indicated with a suffixed H or h: FFh or 0A3CH. Some implementations require a leading zero when the first character is not a digit: 0FFh
Notations such as X'5A3' are sometimes seen, such as in PL/I. This is the most common format for hexadecimal on IBM mainframes (zSeries) and midrange computers (iSeries) running traditional OS's (zOS, zVSE, zVM, TPF, OS/400), and is used in Assembler, PL/1, Cobol, JCL, scripts, commands and other places. This format was common on other (and now obsolete) IBM systems as well.
Donald Knuth introduced the use of particular typeface to represent a particular radix in his book The TeXbook. There, hexadecimal representations are written in a typewriter typeface: 5A3
There is no universal convention to use lowercase or uppercase for the letter digits, and each is prevalent or preferred by particular environments by community standards or convention.
The choice of the letters A through F to represent the digits above nine was not universal in the early history of computers. During the 1950s, some installations favored using the digits 0 through 5 with a macron character ("¯") to indicate the values 10-15. Users of Bendix G-15 computers used the letters U through Z. Bruce A. Martin of Brookhaven National Laboratory considered the choice of A-F "ridiculous" and in 1968 proposed in a letter to the editor of the ACM an entirely new set of symbols based on the bit locations, which did not gain much acceptance.
Not only are there no digits to represent the quantities from ten to fifteen—so letters are used as a substitute—but most Western European languages also lack a nomenclature to name hexadecimal numbers. "Thirteen" and "fourteen" are decimal-based, and even though English has names for several non-decimal powers: pair for the first binary power; score for the first vigesimal power; dozen, gross, and great gross for the first three duodecimal powers. However, no English name describes the hexadecimal powers (corresponding to the decimal values 16, 256, 4096, 65536, ...). Some people read hexadecimal numbers digit by digit like a phone number: 4DA is "four-dee-aye". However, the letter 'A' sounds similar to eight, 'C' sounds similar to three, and 'D' can easily be mistaken for the 'ty' suffix: Is it 4D or forty? Other people avoid confusion by using the NATO phonetic alphabet: 4DA is "four-delta-alpha". Similarly, some use the Joint Army/Navy Phonetic Alphabet ("four-dog-able"), or a similar ad hoc system.
The hexadecimal system can express negative numbers the same way as in decimal: –2A to represent –42 and so on.
However, some prefer instead to express the exact bit patterns used in the processor and consider hexadecimal values best handled as unsigned values. This way, the negative number –42 can be written as FFFF FFD6 in a 32-bit CPU register, as C228 0000 in a 32-bit FPU register or C045 0000 0000 0000 in a 64-bit FPU register.
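A small sketch (ours) reproduces the unsigned bit-pattern view described above, masking a negative Python integer down to a fixed register width:

```python
value = -42
print(format(value & 0xFFFFFFFF, "08X"))   # FFFFFFD6, matching the 32-bit example above
print(format(-2 & 0xFF, "02X"))            # FE - the same idea for a single byte
```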
As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although recurring digits are common since sixteen (10h) has only a single prime factor (two):
For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system: Counting in base 3 is 0, 1, 2, 10 (three). Thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (4²), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers since a larger proportion lie outside its range of finite representation.
All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal: That is, decimal 0.1 corresponds to the infinite recurring representation 0.199999999999... in hexadecimal. However, hexadecimal is more efficient than bases 12 and 60 for representing fractions with powers of two in the denominator (e.g., decimal one sixteenth is 0.1 in hexadecimal, 0.09 in duodecimal, 0;3,45 in sexagesimal and 0.0625 in decimal).
Most computers manipulate binary data, but it is difficult for humans to work with the large number of digits for even a relatively small binary number. Although most humans are familiar with the base 10 system, it is much easier to map binary to hexadecimal than to decimal, because each hexadecimal digit maps to a whole number of bits (4).
This example converts 1111₂ to base ten. Since each position in a binary numeral can contain either a 1 or 0, its value may be easily determined by its position from the right:

0001₂ = 1₁₀
0010₂ = 2₁₀
0100₂ = 4₁₀
1000₂ = 8₁₀

1111₂ = 8₁₀ + 4₁₀ + 2₁₀ + 1₁₀ = 15₁₀

With surprisingly little practice, mapping 1111₂ to F₁₆ in one step becomes easy: see table in Uses. The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.
This example shows the conversion of a binary number to decimal, mapping each digit to the decimal value, and adding the results.
Compare this to the conversion to hexadecimal, where each group of four digits can be considered independently, and converted directly:
The conversion from hexadecimal to binary is equally direct.
The octal system can also be useful as a tool for people who need to deal directly with binary computer data. Octal represents data as three bits per character, rather than four.
Converting from other bases
Division-remainder in source base
As with all bases there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. Theoretically this is possible from any base but for most humans only decimal and for most computers only binary (which can be converted by far more efficient methods) can be easily handled with this method.
Let d be the number to represent in hexadecimal, and let the series hᵢhᵢ₋₁...h₂h₁ be the hexadecimal digits representing the number.

1. i := 1
2. hᵢ := d mod 16
3. d := (d − hᵢ) / 16
4. If d = 0, return the series hᵢ...h₁; else increment i and go to step 2.
"16" may be replaced with any other base that may be desired.
Addition and multiplication
It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value and then performing multiplication and addition to get the final representation.
I.e., to convert the number B3AD to decimal, one can split the conversion into D (13₁₀), A (10₁₀), 3 (3₁₀) and B (11₁₀), then get the final result by multiplying each decimal representation by 16ᵖ, where p is the corresponding position from right to left, beginning with 0. In this case we have 13·16⁰ + 10·16¹ + 3·16² + 11·16³, which equals 45997 in the decimal system.
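The same B3AD example can be checked with a short sketch (ours) that multiplies each digit value by the corresponding power of 16:

```python
digit_value = {c: i for i, c in enumerate("0123456789ABCDEF")}

def from_hex(s):
    total = 0
    for p, c in enumerate(reversed(s.upper())):   # p is the position from the right
        total += digit_value[c] * 16 ** p
    return total

print(from_hex("B3AD"))    # 45997
print(int("B3AD", 16))     # 45997 - the built-in conversion agrees
```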
Conversion via binary
As most computers work in binary, the normal way for a computer to make such a conversion would be to convert to binary first (by doing multiplication and addition in binary) and then make use of the direct mapping from binary to hexadecimal.
Tools for conversion
Most modern computer systems with graphical user interfaces provide a built-in calculator utility, capable of performing conversions between various radixes, generally including hexadecimal.
In Microsoft Windows, the Calculator utility can be set to scientific calculator mode, which allows conversions between radix 16 (hexadecimal), 10 (decimal), 8 (octal) and 2 (binary); the bases most commonly used by programmers. In Scientific Mode, the on screen numeric keypad includes the hexadecimal digits A through F, which are active when "Hex" is selected. The Windows Calculator, however, only supports integers.
The word "hexadecimal" is strange in that hexa is derived from the Greek έξ (hex) for "six" and decimal is derived from the Latin for "tenth". It may have been derived from the Latin root, but Greek deka is so similar to the Latin decem that some would not consider this nomenclature inconsistent. However, the word "sexagesimal" (base 60) retains the Latin prefix. The earlier Bendix documentation used the term "sexadecimal". Donald Knuth has pointed out that the etymologically correct term is "senidenary", from the Latin term for "grouped by 16". (The terms "binary", "ternary" and "quaternary" are from the same Latin construction, and the etymologically correct term for "decimal" arithmetic is "denary".) Schwartzman notes that the pure expectation from the form of usual Latin-type phrasing would be "sexadecimal", but then computer hackers would be tempted to shorten the word to "sex". Incidentally, the etymologically proper Greek term would be hexadecadic (although in Modern Greekdeca-hexadic (δεκαεξαδικός) is more commonly used).
Common patterns and humor
Hexadecimal is sometimes used in programmer jokes because certain words can be formed using only hexadecimal digits. Some of these words are "dead", "beef", "babe", and with appropriate substitutions "c0ffee". Since these are quickly recognizable by programmers, debugging setups sometimes initialize memory to them to help programmers see when something has not been initialized.
Some people add an H after a number if they want to show that it is written in hexadecimal. In older Intel assembly syntax, this is sometimes the case.
"Hexspeak" may be the forerunner of the modern web parlance of "1337speak"
An example is the magic number in FAT Mach-O files and the Java class file structure, which is "CAFEBABE". Single-architecture Mach-O files have the magic number "FEEDFACE" at their beginning. "DEADBEEF" is sometimes put into uninitialized memory. Microsoft Windows XP clears its locked index.dat files with the hex codes: "0BADF00D".
Two common bit patterns often employed to test hardware are 01010101 and 10101010 (their corresponding hex values are 55h and AAh, respectively). The reason for their use is to alternate between off ('0') to on ('1') or vice versa when switching between these two patterns. These two values are often used together as signatures in critical PC system sectors (e.g., the hex word, 0xAA55 which on little-endian systems is 55h followed by AAh, must be at the end of a valid Master Boot Record).
The following table shows a joke in hexadecimal:

3x12 = 36
2x12 = 24
1x12 = 12
0x12 = 18

The first three are interpreted as multiplication, but in the last, "0x" signals hexadecimal interpretation of 12, which is 18.
Another joke based on the use of a word containing only letters from the first six in the alphabet (and thus those used in hexadecimal) is...
If only DEAD people understand hexadecimal, how many people understand hexadecimal?
In this case, DEAD refers to a hexadecimal number (57005 base 10), not the state of being no longer alive. Obviously, DEAD normally should not be written in all-caps (as in the preceding) as it makes it stand out, thus ruining the riddle.
There have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts usually
propose pronunciation and/or symbology. Sometimes the proposal unifies standard
measures so that they are multiples of 16.
An example of unifying standard measures is Hexadecimal time which subdivides a day by 16 so that there are 16 "hexhours" in a day. | http://www.reference.com/browse/Unprintable+characters | 13 |
130 | Nuclear weapon design
[Figure caption: The first nuclear weapons, though large, cumbersome and inefficient, provided the basic design building blocks of all future weapons. Here the Gadget device is prepared for the first nuclear test: Trinity.]
Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three basic design types. In all three, the explosive energy is derived primarily from nuclear fission, not fusion.
- Pure fission weapons were the first nuclear weapons built and the only type
ever used in warfare. The active material is fissile uranium (U-235)
or plutonium (Pu-239), explosively assembled into a chain-reacting critical
mass by one of two methods:
- Gun assembly, in which one piece of fissile uranium is fired down a gun barrel to a fissile uranium target at the end of the barrel (plutonium can be used in this design, but it has proven to be impractical), or
- Implosion, in which a fissile mass of either material (U-235, Pu-239, or a combination) is surrounded by high explosives that compress the mass, resulting in criticality.
- Fusion-boosted fission weapons improve on the implosion design. The high temperature and pressure environment at the center of an exploding fission weapon compresses and heats a mixture of tritium and deuterium gas (heavy isotopes of hydrogen). The hydrogen fuses to form helium and free neutrons. The energy release from fusion reactions is relatively negligible, but each neutron starts a new fission chain reaction, greatly reducing the amount of fissile material that would otherwise be wasted. Boosting can more than double the weapon's fission energy release.
- Two-stage thermonuclear weapons are essentially a daisy chain of fusion-boosted fission weapons, with only two daisies, or stages, in the chain. The second stage, called the "secondary," is imploded by x-ray energy from the first stage, called the "primary." This radiation implosion is much more effective than the high-explosive implosion of the primary. Consequently, the secondary can be many times more powerful than the primary, without being bigger. The secondary could be designed to maximize fusion energy release, but in most designs fusion is employed only to drive or enhance fission, as it is in the primary. More stages could be added, but the result would be a multi-megaton weapon too powerful to be useful. (The United States briefly deployed a three-stage 25-megaton bomb, the B41, starting in 1961. Also in 1961, the Soviet Union tested, but did not deploy, a three-stage 50-megaton device, Tsar Bomba.)
Pure fission weapons are always the first type to be built by a nation state, and, if such a thing should happen, would be the type built by a non-state terrorist organization. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost-effective option once the necessary industrial infrastructure is built.
All innovations in nuclear weapon design originated in the United States; the following descriptions feature U.S. designs.
In early news accounts, pure fission weapons were called atomic bombs or A-bombs, a misnomer since the energy comes only from the nucleus of the atom. Weapons involving fusion were called hydrogen bombs or H-bombs, also a misnomer since their destructive energy comes mostly from fission. Insiders favored the terms nuclear and thermonuclear, respectively.
The term thermonuclear refers to the high temperatures required to initiate fusion. It ignores the equally important factor of pressure, which was considered secret at the time the term became current. Many nuclear weapon terms are similarly inaccurate because of their origin in a classified environment. Some are nonsense code words such as "alarm clock" (see below).
Nuclear fission splits the heaviest of atoms to form lighter atoms. Nuclear fusion bonds together the lightest atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs.
In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations.
Fission can be self-sustaining because fission produces more neutrons of the speed required to cause new fissions. When a free neutron hits the nucleus of a fissionable atom like uranium-235 ( 235U), the uranium splits into two smaller atoms called fission fragments, plus more neutrons.
The uranium atom can split any one of dozens of different ways, as long as the atomic weights add up to 236 (uranium plus the extra neutron). The following equation shows one possible split, namely into strontium-95 ( 95Sr), xenon-139 ( 139Xe), and two neutrons (n), plus energy:

n + ²³⁵U → ⁹⁵Sr + ¹³⁹Xe + 2 n + energy
The immediate energy release per atom is 180 million electron volts (MeV), i.e. 74 TJ/kg, of which 90% is kinetic energy (or motion) of the fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). Thus their initial kinetic energy is 67 TJ/kg, hence their initial speed is 12,000 kilometers per second, but their high electric charge causes many inelastic collisions with nearby nuclei. The fragments remain trapped inside the bomb's uranium pit until their motion is converted into x-ray heat, a process which takes about a millionth of a second (a microsecond).
This x-ray energy produces the blast and fire which are the purpose of a nuclear explosion.
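A back-of-the-envelope check (ours, not from the article) of the quoted fragment speed, treating the 67 TJ/kg as ordinary, non-relativistic kinetic energy:

```python
import math

kinetic_energy_per_kg = 67e12                  # J/kg, the figure given in the text
speed = math.sqrt(2 * kinetic_energy_per_kg)   # from E = (1/2) m v^2 per unit mass
print(speed / 1000)                            # ~11,600 km/s, i.e. roughly the 12,000 km/s quoted
```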
After the fission products slow down, they remain radioactive. Being new elements with too many neutrons, they eventually become stable by means of beta decay, converting neutrons into protons by throwing off electrons and gamma rays. Each fission product nucleus decays between one and six times, average three times, producing radioactive elements with half-lives up to 200,000 years. In reactors, these products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global.
Meanwhile, inside the exploding bomb, the free neutrons released by fission strike nearby U-235 nuclei causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one, the number of fissions can theoretically double a hundred times in a microsecond, which could consume all uranium up to hundreds of tons by the hundredth link in the chain. In practice, bombs do not contain that much uranium, and, anyway, just a few kilograms undergo fission before the uranium blows itself apart.
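A rough check (our own arithmetic, not from the article) of the claim that one hundred doublings could consume hundreds of tons of uranium:

```python
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_U235 = 235.0    # grams per mole

atoms_fissioned = 2 ** 100                                 # after one hundred doublings
mass_grams = atoms_fissioned / AVOGADRO * MOLAR_MASS_U235
print(mass_grams / 1e6)    # ~495 metric tons - "hundreds of tons", as stated
```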
Holding an exploding bomb together is the greatest challenge of fission weapon design. The heat of fission rapidly expands the uranium pit, spreading apart the target nuclei and making space for the neutrons to escape without being captured. The chain reaction stops.
Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: U-235, also known as highly enriched uranium (HEU), oralloy (Oy) meaning Oak Ridge Alloy, or 25 (the last digits of the atomic number, which is 92 for uranium, and the atomic weight, here 235, respectively); and Pu-239, also known as plutonium, or 49 (from 94 and 239).
Uranium's most common isotope, U-238, is fissionable but not fissile. Its aliases include natural or unenriched uranium, depleted uranium (DU), tubealloy (Tu), and 28. It cannot sustain a chain reaction, because its own fission neutrons are not powerful enough to cause more U-238 fission. However, the neutrons released by fusion will fission U-238. This reaction produces most of the energy in a typical two-stage thermonuclear weapon.
Fusion cannot be self-sustaining because it does not produce the heat and pressure necessary for more fusion. It produces neutrons which run away with the energy. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium ( 2D), fuses with hydrogen-3, or tritium ( 3T), to form helium-4 ( 4He) plus one neutron (n) and energy:

2D + 3T → 4He + n + 17.6 MeV
Notice that the total energy output, 17.6 MeV, is about one tenth of that from fission, but the ingredients are only about one-fiftieth as massive, so the energy output per kilogram is greater. However, in this fusion reaction 80% of the energy, or 14 MeV, is in the motion of the neutron which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire.
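The per-kilogram comparison can be made explicit by dividing the energy release per reaction by the mass of the reactants in each case. The sketch below uses the same approximate constants as before; the exact ratio depends on how the figures above are rounded.

```python
# Energy density comparison: D-T fusion versus U-235 fission (approximate constants).
MEV_TO_J = 1.602e-13   # joules per MeV
AMU = 1.661e-27        # kg per atomic mass unit

fission_per_kg = (180 * MEV_TO_J) / (236 * AMU)   # 180 MeV per U-236 split
fusion_per_kg = (17.6 * MEV_TO_J) / (5 * AMU)     # 17.6 MeV per D (2 u) + T (3 u) pair

print(f"fission: {fission_per_kg / 1e12:.0f} TJ/kg")   # ~74 TJ/kg
print(f"fusion:  {fusion_per_kg / 1e12:.0f} TJ/kg")    # ~340 TJ/kg
print(f"fusion releases ~{fusion_per_kg / fission_per_kg:.1f}x more energy per kilogram")
```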
The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (either type: 235 or 238) or plutonium, the result is fission and the release of 180 MeV of fission energy, which will produce the heat and pressure necessary to sustain fusion, in addition to multiplying the energy output tenfold.
Fission is thus necessary to start fusion, to sustain fusion, and to optimize the extraction of useful energy from fusion (by making more fission). In the case of a neutron bomb (see below), the last point does not apply, since the escape of neutrons is the objective.
A third important nuclear reaction is the one that creates tritium, essential to the type of fusion used in weapons and, incidentally, the most expensive ingredient in any nuclear weapon. Tritium, or hydrogen-3, is made by bombarding lithium-6 ( 6Li) with a neutron (n) to produce helium-4 ( 4He) plus tritium ( 3T) and energy:

6Li + n → 4He + 3T + energy (about 5 MeV)
A nuclear reactor is necessary to provide the neutrons. The industrial-scale conversion of lithium-6 to tritium is very similar to the conversion of uranium-238 into plutonium-239. In both cases the feed material is placed inside a nuclear reactor and removed for processing after a period of time. In the 1950s, when reactor capacity was limited, producing an atom of tritium meant forgoing the production of an atom of plutonium.
The fission of one plutonium atom releases ten times more total energy than the fusion of one tritium atom, and it generates fifty times more blast and fire. For this reason, tritium is included in nuclear weapon components only when it causes more fission than its production sacrifices, namely in the case of fusion-boosted fission.
However, an exploding nuclear bomb is a nuclear reactor. The above reaction can take place simultaneously throughout the secondary of a two-stage thermonuclear weapon, producing tritium in place as the device explodes.
Of the three basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.
Pure fission weapons
The first task of a nuclear weapon design is to rapidly assemble, at the time of detonation, more than one critical mass of fissile uranium or plutonium. A critical mass is one in which the percentage of fission-produced neutrons which are captured and cause more fission is large enough to perpetuate the fission and prevent it from dying out.
Once the critical mass is assembled, at maximum density, a burst of neutrons is supplied to start as many chain reactions as possible. Early weapons used an "urchin" inside the pit containing non-touching interior surfaces of polonium-210 and beryllium. Implosion of the pit crushed the urchin, bringing the two metals in contact to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better control of the timing of chain reaction initiation.
The critical mass of an uncompressed sphere of bare metal is 110 lb (50 kg) for uranium-235 and 35 lb (16 kg) for delta-phase plutonium-239. In practical applications, the amount of material required for critical mass is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons.
To avoid a chain reaction during handling, the fissile material in the weapon must be sub-critical before detonation. It may consist of one or more components containing less than one uncompressed critical mass each. A thin hollow shell can contain more than the bare-sphere critical mass of material without being critical, as can a thin cylinder, which can be made arbitrarily long without ever becoming critical.
A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the expansion of the reacting material, increasing the efficiency of the weapon. Often the same layer serves both as tamper and as neutron reflector.
Gun-type assembly weapon
- Main article: Gun-type fission weapon
Little Boy, the Hiroshima bomb, used 140 lb (64 kg) of uranium with an average enrichment of around 80%, or 112 lb (51 kg) of U-235, just about the bare-metal critical mass. (See Little Boy article for a detailed drawing.) When assembled inside its tamper/reflector of tungsten carbide, the 140 lb was more than twice critical mass. Before detonation, it was separated into two sub-critical pieces, one of which was later fired down a gun barrel at the other. About 1% of the uranium underwent fission; the remainder, representing 98% of the entire wartime output of the giant factories at Oak Ridge, scattered uselessly.
The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the U.S. until 1992, accounting for a significant fraction of the U-235 in the arsenal.
Implosion type weapon
Fat Man, the Nagasaki bomb, used 13.6 lb (6.2 kg) of Pu-239, which is only 39% of bare-metal critical mass. (See Fat Man article for a detailed drawing.) The 13.6 lb pit, surrounded by its U-238 reflector, was sub-critical before detonation. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to increase its density by simultaneous detonation of conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. It is estimated that only about 20% of the plutonium underwent fission; the rest (about 11 lb) was scattered.
An implosion shock wave might be of such short duration that only a fraction of the pit is compressed at any instant as the wave passes through it. A pusher shell made of low-density metal, such as aluminum, beryllium, or an alloy of the two (aluminum being easier and safer to shape, beryllium valued for its high neutron-reflective capability), may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backwards, thereby lengthening its duration. Fat Man used an aluminum pusher.
The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper (which did not undergo fission). Once the chain reaction started in the plutonium, the momentum of the implosion had to be reversed before expansion could stop the fission. By holding everything together for a few hundred nanoseconds more, the efficiency was increased.
The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s.
Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases, also known as allotropes. As plutonium cools, changes in phase result in distortion. This distortion is normally overcome by alloying it with 3–3.5 molar% (0.9–1.0% by weight) gallium, which causes it to take up its delta phase over a wide temperature range. When cooling from the molten state, it then suffers only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds themselves are corrosive, so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium.
Because plutonium is chemically reactive and toxic if inhaled or otherwise taken into the body, it is common to plate the completed pit with a thin layer of inert metal to protect the assembler. In the first weapons nickel was used, but gold is now preferred.
The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. The pit, sitting on a hollow cone inside the tamper cavity, was said to be levitated. The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man.
It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was 5 feet wide vs 2 feet for Little Boy.
Eleven years later, implosion designs had advanced sufficiently that the 5 foot-diameter sphere of Fat Man had been reduced to a 1 foot-diameter cylinder 2 feet long, the Swan device.
The Pu-239 pit of Fat Man was only 3.6 inches in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminum, and high explosives. The key to reducing that girth was the two-point implosion design.
Two-point linear implosion
A very inefficient implosion design is one that simply reshapes an ovoid into a sphere, with minimal compression. In linear implosion, an untamped, solid, elongated mass of Pu-239, larger than critical mass in a sphere, is embedded inside a cylinder of high explosive with a detonator at each end.
Detonation makes the pit critical by driving the ends inward, creating a spherical shape. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes it inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions (ADMs), also known as backpack or suitcase nukes.
All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches.
Two-point hollow-pit implosion
A more efficient two-point implosion system uses two high explosive lenses and a hollow pit.
A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraint, but it required a heavy U-238 tamper, a thick aluminum pusher, and three tons of high explosives.
After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminum pusher, and less high explosive. The hollow pit made levitation obsolete.
The Fat Man bomb had two concentric, spherical shells of high explosives, each about 10 inches thick. The inner shell drove the implosion. The outer shell consisted of a soccer-ball pattern of 32 high explosive lenses, each of which converted the convex wave from its detonator into a concave wave matching the contour of the outer surface of the inner shell. If these 32 lenses could be replaced with only two, the high explosive sphere could become an ellipsoid (prolate spheroid) with a much smaller diameter.
The best illustration of these two features is a 1956 drawing from the Swedish nuclear bomb program. The program was terminated before it produced a test explosion. The drawing shows the essential elements of the two-point hollow-pit design.
There are similar drawings in the open literature that come from the post-war German nuclear bomb program, which was also terminated, and from the French program, which produced an arsenal.
The mechanism of the high explosive lens (diagram item #6) is not shown in the Swedish drawing, but a standard lens made of fast and slow high explosives, as in Fat Man, would be much longer than the shape depicted. For a single high explosive lens to generate a concave wave that envelops an entire hemisphere, it must either be very long or the part of the wave on a direct line from the detonator to the pit must be slowed dramatically.
A slow high explosive is too fast, but the flying plate of an "air lens" is not. A metal plate, shock-deformed, and pushed across an empty space can be designed to move slowly enough. A two-point implosion system using air lens technology can have a length no more than twice its diameter, as in the Swedish diagram above.
Fusion-boosted fission weapons
- Main article: Boosted fission weapon
The next step in miniaturization was to speed up the fissioning of the pit to reduce the amount of time inertial confinement needed. The hollow pit provided an ideal location to introduce fusion for the boosting of fission. A 50-50 mixture of tritium and deuterium gas, pumped into the pit during arming, will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical.
Once the hollow pit is perfected, there is little reason not to boost.
Boosting reduces diameter in three ways, all the result of faster fission:
- Since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a light-weight beryllium shell (to reflect escaping neutrons back into the pit). The diameter is reduced.
- The mass of the pit can be reduced by half, without reducing yield. Diameter is reduced again.
- Since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further.
Since boosting is required to attain full design yield, any reduction in boosting reduces yield. Boosted weapons are thus variable-yield weapons. Yield can be reduced any time before detonation, simply by putting less than the full amount of tritium into the pit during the arming procedure.
The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device, tested June 22, 1956, as the Inca shot of Operation Redwing, at Eniwetok. Its yield was 15 kilotons, about the same as Little Boy, the Hiroshima bomb. It weighed 105 lb (47.6 kg) and was cylindrical in shape, 11.6 inches (29.5 cm) in diameter and 22.9 inches (58 cm) long. The above schematic illustrates what were probably its essential features.
Eleven days later, July 3, 1956, the Swan was test-fired again at Eniwetok, as the Mohawk shot of Redwing. This time it served as the primary, or first stage, of a two-stage thermonuclear device, a role it played in a dozen such tests during the 1950s. Swan was the first off-the-shelf, multi-use primary, and the prototype for all that followed.
After the success of Swan, 11 or 12 inches seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only 15 inches long. It was tested two dozen times in the 1957-62 period before being deployed. No other design had such a long string of test failures. Since the longer devices tended to work correctly on the first try, there must have been some difficulty in flattening the two high explosive lenses enough to achieve the desired length-to-width ratio.
One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile, shown here in comparison to its Fat Man predecessor, dimensions in inches.
Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to radiation interference (RI). It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial pre-detonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate issue). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect.
Two-stage thermonuclear weapons
- Main article: Teller-Ulam design
Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to tack on a second independent stage, called a secondary.
Ivy Mike, the first two-stage thermonuclear detonation, 10.4 megatons, November 1, 1952.
In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquified or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it wouldn't work, even with large amounts of prohibitively expensive tritium added in.
The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanisław Ulam invented radiation implosion, known publicly for nearly three decades only as the Teller-Ulam H-bomb secret.
The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons.
In radiation implosion, the burst of x-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. For a millionth of a second, most of the energy of several kilotons of TNT is absorbed by a plasma (superheated gas) generated from plastic foam in the radiation channel. With energy going in and not coming out, the plasma rises to solar core temperatures and expands with solar core pressures. Nearby objects which are still cool are crushed by the temperature difference.
The cool nuclear materials surrounded by the radiation channel are imploded much like the pit of the primary, except with vastly more force. This greater pressure enables the secondary to be significantly more powerful than the primary, without being much larger.
A. Warhead before firing; primary (fission bomb) at top, secondary (fusion fuel) at bottom, all suspended in polystyrene foam. B. High explosive fires in primary, compressing plutonium core into supercriticality and beginning a fission reaction. C. Fission primary emits x-rays which reflect along the inside of the casing, irradiating the polystyrene foam. D. Polystyrene foam becomes plasma, compressing secondary, and fissile uranium (U-235) spark plug begins to fission. E. Compressed and heated, lithium-6 deuteride fuel begins fusion reaction, neutron flux causes tamper to fission. A fireball is starting to form.
For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was 15 inches (38 cm) in diameter and 23.4 inches (59 cm) long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons, vs 15 kilotons).
Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile.
A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons.
As with boosting, the advantages of the two-stage thermonuclear design are so great that there is little incentive not to use it, once a nation has mastered the technology.
In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example:
- The best way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears.
- For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. No material is better suited for both of these jobs than ordinary, cheap uranium-238, which happens, also, to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool, to hold it, inertially, in a highly compressed state, and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. It is noteworthy that insiders never used the term hydrogen bomb.
- Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield.
The initial impetus behind the two-stage weapon was President Truman's 1950 promise to build a 10-megaton hydrogen superbomb as America's response to the 1949 test of the first Soviet fission bomb. But the resulting invention turned out to be the cheapest and most compact way to build small nuclear bombs as well as large ones, erasing any meaningful distinction between A-bombs and H-bombs, and between boosters and supers. All the best techniques for fission and fusion explosions are incorporated into one all-encompassing, fully-scalable design principle. Even six-inch diameter nuclear artillery shells can be two-stage thermonuclears.
In the ensuing fifty years, nobody has come up with a better way to build a nuclear bomb. It is the design of choice for the U.S., Russia, Britain, France, and China, the five thermonuclear powers. The other nuclear-armed nations, Israel, India, Pakistan, and North Korea, probably have single-stage weapons, possibly boosted.
In a two-stage thermonuclear weapon, three types of energy emerge from the primary to impact the secondary: the expanding hot gases from high explosive charges which implode the primary, plus the electromagnetic radiation and the neutrons from the primary's nuclear detonation. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary from the hot gases and channels the electromagnetic radiation and neutrons toward the right place at the right time.
There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. government document formally released to the public appears to be a caption in a recent graphic promoting the Reliable Replacement Warhead Program. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner.
The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens," "reflector/neutron gun carriage," and "reflector wrap." The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation.
While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples:
All modern nuclear weapons make some use of D-T fusion. Even pure fission weapons include neutron generators which are high-voltage vacuum tubes containing trace amounts of tritium and deuterium.
However, in the public perception, hydrogen bombs, or H-bombs, are multi-megaton devices a thousand times more powerful than Hiroshima's Little Boy. Such high-yield bombs are actually two-stage thermonuclears, scaled up to the desired yield, with uranium fission, as usual, providing most of their destructive energy.
The idea of the hydrogen bomb first came to public attention in 1949, when prominent scientists openly recommended against building nuclear bombs more powerful than the standard pure-fission model, on both moral and practical grounds. Their assumption was that critical mass considerations would limit the potential size of fission explosions, but that a fusion explosion could be as large as its supply of fuel, which has no critical mass limit. In 1949, the Russians exploded their first fission bomb, and in 1950 President Truman ended the H-bomb debate by ordering the Los Alamos designers to build one.
In 1952, the 10.4-megaton Ivy Mike explosion was announced as the first hydrogen bomb test, reinforcing the idea that hydrogen bombs are a thousand times more powerful than fission bombs.
In 1954, J. Robert Oppenheimer was labeled a hydrogen bomb opponent. The public did not know there were two kinds of hydrogen bomb (neither of which is accurately described as a hydrogen bomb). On May 23, when his security clearance was revoked, item three of the four public findings against him was "his conduct in the hydrogen bomb program." In 1949, Oppenheimer had supported single-stage fusion-boosted fission bombs, to maximize the explosive power of the arsenal given the trade-off between plutonium and tritium production. He opposed two-stage thermonuclear bombs until 1951, when radiation implosion, which he called "technically sweet," first made them practical. He no longer objected. The complexity of his position was not revealed to the public until 1976, nine years after his death.
When ballistic missiles replaced bombers in the 1960s, most multi-megaton bombs were replaced by missile warheads (also two-stage thermonuclears) scaled down to one megaton or less.
The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon.
The U.S. name, Alarm Clock, was a nonsense code name. The Russian name for the same design was more descriptive: Sloika, a layered pastry cake. A single-stage Russian Sloika was tested on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, April 26, 1954, was a two-stage thermonuclear code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons.
Because the Russian Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that Russia won the H-bomb race. (The 1952 U.S. Ivy Mike test used cryogenically-cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction.) However, the first Russian test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike.
Clean bombs
Bassoon, the prototype for a 3.5-megaton clean bomb or a 25-megaton dirty bomb. Dirty version shown here, before its 1956 test.
On March 1, 1954, America's largest-ever nuclear test explosion, the 15-megaton Bravo shot of Operation Castle at Bikini, delivered a promptly lethal dose of fission-product fallout to more than 6,000 square miles of Pacific Ocean surface. Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs.
In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. Since it takes roughly five megatons of fusion to produce the same blast and fire effect as one megaton of fission, the clean bomb needed to be very large. For the first and only time, a third stage, called the tertiary, was added, using the secondary as its primary. The device was called Bassoon. It was tested as the Zuni shot of Operation Redwing, at Bikini on May 28, 1956. With all the uranium in Bassoon replaced with a substitute material such as lead, its yield was 3.5 megatons, 85% fusion and only 15% fission.
On July 19, AEC Chairman Lewis Strauss said the clean bomb test "produced much of importance . . . from a humanitarian aspect." However, two days later the dirty version of Bassoon, with the uranium parts restored, was tested as the Tewa shot of Redwing. Its 5-megaton yield, 87% fission, was deliberately suppressed to keep fallout within a smaller area. This dirty version was later deployed as the three-stage, 25-megaton Mark-41 bomb, which was carried by U.S. Air Force bombers, but never tested at full yield.
As such, high-yield clean bombs were a public relations exercise. The actual deployed weapons were the dirty version, which maximized yield for the same size device.
Cobalt bombs
- Main article: Cobalt bomb
A fictional doomsday bomb, made popular by Nevil Shute's 1957 novel On the Beach and the subsequent 1959 movie, the cobalt bomb was a hydrogen bomb with a jacket of cobalt metal. The neutron-activated cobalt would supposedly have maximized the environmental damage from radioactive fallout. The cobalt bomb was also popularized as the "Doomsday Device" in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, in which it brings about the end of mankind by covering the planet in a radioactive shroud for 93 years. The element added to the bombs is referred to in the film as "cobalt-thorium G".
Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, possibly built and tested, but not deployed. In the 1964 edition of the DOD/AEC book The Effects of Nuclear Weapons, a new section titled Radiological Warfare clarified the issue. Fission products are as deadly as neutron-activated cobalt. The standard high-fission thermonuclear weapon is automatically a weapon of radiological warfare, as dirty as a cobalt bomb.
Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that of Co-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission-product radiation drops off rapidly, so that Co-60 fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years.
In 1954, to explain the surprising amount of fission-product fallout produced by hydrogen bombs, Ralph Lapp coined the term fission-fusion-fission to describe a process inside what he called a three-stage thermonuclear weapon. His process explanation was correct, but his choice of terms caused confusion in the open literature. The stages of a nuclear weapon are not fission, fusion, and fission. They are the primary, the secondary, and, in one exceptionally powerful weapon, the tertiary. Each of these stages employs fission, fusion, and fission.
Neutron bombs
- Main article: Neutron bomb
While high-yield clean bombs were never deployed, some low-yield clean bombs were. Officially known as enhanced radiation weapons, ERWs, they are more accurately described as suppressed yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, 700 m (2300 ft), is less than that from its neutron radiation. If a one-kiloton ERW is exploded 800 m above ground, buildings at ground zero will survive but people in them will die of radiation illness caused by neutrons and other fireball radiation.
Although the buildings would survive the blast, neutron activation would make them radioactive. If detonation occurred at a lower altitude, the full force of one kiloton (i.e., four thousand 500 lb bombs) would flatten them.
ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s.

| Energy distribution of weapon | Standard | Enhanced |
| --- | --- | --- |
| Blast | 50% | 40% |
| Thermal energy | 35% | 25% |
| Instant radiation | 5% | 30% |
| Residual radiation | 10% | 5% |
In 1958 Samuel Cohen investigated a low-yield "clean" nuclear weapon and discovered that the thickness of a "clean" bomb's case scales as the cube root of its yield. A larger percentage of neutrons therefore escapes from a small detonation, because of the thinner case required to reflect x-rays back during ignition of the secondary stage. For example, a 1-kiloton bomb needs a case only one-tenth as thick as that required for 1 megaton.
So although most neutrons are absorbed by the casing in a 1-megaton bomb, in a 1-kiloton bomb they would mostly escape. A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1-10 kilotons, with fission proportion varying from 50% at 1-kiloton to 25% at 10-kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10-15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88.
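The cube-root relation quoted above is easy to verify numerically. The sketch below is a minimal illustration that takes the stated scaling at face value (thickness proportional to yield to the one-third power) and compares relative case thicknesses; it is not a model of any actual weapon casing.

```python
# Relative case thickness under the cube-root scaling described above.
def relative_case_thickness(yield_kt: float, reference_kt: float = 1000.0) -> float:
    """Case thickness relative to a 1-megaton reference, assuming thickness ∝ yield^(1/3)."""
    return (yield_kt / reference_kt) ** (1.0 / 3.0)

for y in (1, 10, 1000):  # kilotons
    print(f"{y:>5} kt -> {relative_case_thickness(y):.2f} x the 1-Mt case thickness")
# A 1-kt device needs a case only one-tenth as thick as a 1-Mt device,
# which is why proportionally far more fusion neutrons escape.
```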
Oralloy thermonuclear warheads
In 1999, nuclear weapon design was in the news again, for the first time in decades. In January, the U.S. House of Representatives released the Cox Report (Christopher Cox R-CA) which alleged that China had somehow acquired classified information about the U.S. W88 warhead. Nine months later, Wen Ho Lee, a Taiwanese immigrant working at Los Alamos, was publicly accused of spying, arrested, and served nine months in pre-trial detention, before the case against him was dismissed. It is not clear that there was, in fact, any espionage.
In the course of eighteen months of news coverage, the W88 warhead was described in unusual detail. The New York Times printed a schematic diagram on its front page. The most detailed drawing appeared in A Convenient Spy, the 2001 book on the Wen Ho Lee case by Dan Stober and Ian Hoffman, adapted and shown here with permission.
Designed for use on Trident II (D-5) submarine-launched ballistic missiles, the W88 entered service in 1990 and was the last warhead designed for the U.S. arsenal. It has been described as the most advanced, although open literature accounts do not indicate any major design features that were not available to U.S. designers in 1958.
The above diagram shows all the standard features of ballistic missile warheads since the 1960s, with two exceptions that give it a higher yield for its size.
- The outer layer of the secondary, called the "pusher," which serves three functions: heat shield, tamper, and fission fuel, is made of U-235 instead of U-238, hence the name Oralloy (U-235) Thermonuclear. Being fissile, rather than merely fissionable, allows the pusher to fission faster and more completely, increasing yield. This feature is available only to nations with a great wealth of fissile uranium. The U.S. is estimated to have 500 tons.
- The secondary is located in the wide end of the re-entry cone, where it can be larger, and thus more powerful. The usual arrangement is to put the heavier, denser secondary in the narrow end for greater aerodynamic stability during re-entry from outer space, and to allow more room for a bulky primary in the wider part of the cone. (The W87 warhead drawing in the previous section shows the usual arrangement.) Because of this new geometry, the W88 primary uses compact conventional high explosives (CHE) to save space, rather than the more usual, and bulky but safer, insensitive high explosives (IHE). The re-entry cone probably has ballast in the nose for aerodynamic stability.
Notice that the alternating layers of fission and fusion material in the secondary are an application of the Alarm Clock/Sloika principle.
Reliable replacement warhead
- Main article: Reliable Replacement Warhead
The United States has not produced any nuclear warheads since 1989, when the Rocky Flats pit production plant, near Boulder, Colorado, was shut down for environmental reasons. With the end of the Cold War coming two years later, the production line has remained idle except for inspection and maintenance functions.
The National Nuclear Security Administration, the latest successor to the Atomic Energy Commission and the Department of Energy for nuclear weapons matters, has proposed building a new pit facility and starting a production line for a new warhead called the Reliable Replacement Warhead (RRW). Two advertised safety improvements of the RRW would be a return to the use of "insensitive high explosives which are far less susceptible to accidental detonation," and the elimination of "certain hazardous materials, such as beryllium, that are harmful to people and the environment." Since the new warhead would have to enter service without any nuclear testing, it could not use a new design with untested concepts.
The Weapon Design Laboratories
- Main article: Lawrence Berkeley National Laboratory
The first systematic exploration of nuclear weapon design concepts took place in the summer of 1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference.
By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer. The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods," the one approach that turned out to be a dead end.
- Main article: Los Alamos National Laboratory
At Los Alamos, Emilio G. Segrè found in April 1944 that the proposed Thin Man gun-assembly bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities, so the implosion-type Fat Man was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford site in Washington.
In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved.
Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do.
- Main article: Lawrence Livermore National Laboratory
With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly.
Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles.
In 1957 and 1958 both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos.
The period of real innovation was ending by then, anyway. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs.
Nuclear weapons are designed by trial and error. The trial often involves exploding a prototype.
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and their codes are not fully adequate.
It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass produced.
It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place.
An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded at the far end of a long, very straight pipe.
The picture below shows the Shrimp device, detonated on March 1, 1954 at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when x-radiation from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel.
From the shot cab, the pipes turned horizontal and traveled 7500 ft (2.3 km), along a causeway built on the Bikini reef, to a remote-controlled data collection bunker on Namu Island.
While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary created a relatively opaque radiation front in the channel filler which acted like a slow-moving logjam to retard the passage of radiant energy. Behind this moving front was a fully-ionized, low-Z (low atomic number) plasma heated to 20,000 degrees Celsius, soaking up energy like a black body and eventually driving the implosion of the secondary.
The radiation transit time, on the order of half a microsecond, is the time it takes the entire radiation channel to reach thermal equilibrium as the radiation front moves down its length. The implosion of the secondary is based on the temperature difference between the hot channel and the cool interior of the secondary. Its timing is important because the interior of the secondary is subject to neutron preheat.
While the radiation channel is heating and starting the implosion, neutrons from the primary catch up with the x-rays, penetrate into the secondary and start breeding tritium with the third reaction noted in the first section above. This Li-6 + n reaction is exothermic, producing 5 MeV per event. The spark plug is not yet compressed and thus is not critical, so there won't be significant fission or fusion. But if enough neutrons arrive before implosion of the secondary is complete, the crucial temperature difference will be degraded. This is the reported cause of failure for Livermore's first thermonuclear design, the Morgenstern device, tested as Castle Koon, April 7, 1954.
These timing issues are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications.
It is not clear from the public record how successful the Shrimp light pipes were. The data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times greater than expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were twenty miles farther away, in a bunker that survived intact.)
The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an "n, 2n" reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high.
Fallout analysis revealed to designers that, with the n, 2n reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly.
As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat named the Lucky Dragon sailed home with enough fallout on its decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons.
Underground testing
Subsidence craters at Yucca Flat, Nevada Test Site.
The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the U.S. conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas.
The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over intensely radioactive underground caverns created by nuclear explosions (see photo).
After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the deuterium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.
When two-stage weapons became standard in the early 1950s, weapon design determined the layout of America's new, widely dispersed production facilities, and vice versa.
Because primaries tend to be bulky, especially in diameter, plutonium, which has a smaller critical mass than uranium, is the fissile material of choice for pits, with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility.
The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called Calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries.
Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries.
The Savannah River plant in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride.
Warhead design safety
- Gun-type weapons
It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the high explosives in Little Boy (four bags of Cordite powder) were inserted into the bomb in flight, shortly after takeoff on August 6, 1945. It was the first time a gun-type nuclear weapon had ever been fully assembled.
Gun-type weapons have always been inherently unsafe.
- In-flight pit insertion
Accidental criticality of this kind is much less likely with implosion weapons, since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern.
On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it was feasible to utilize in-flight pit insertion. The bomber would take off with no fissile material in the bomb. Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system.
In-flight pit insertion will not work with a hollow pit in contact with its tamper.
- Steel ball safety method
One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the hollow pit, preventing it from being collapsed into a supercritical configuration in an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, and was also used in the Violet Club and Yellow Sun Mk.1 bombs.
- Chain safety method
Alternatively, the pit can be "safed" by having its normally-hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings, as has happened in several weapons accidents, it could not cause a nuclear explosion.
- Wire safety method
The US W47 warhead used in Polaris A1 and Polaris A2 had a safety device consisting of a boron-coated-wire inserted into the hollow pit at manufacture. The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. However, once withdrawn the wire could not be re-inserted.
- One-point safety
While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern.
In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical.
Unfortunately, it is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests.
Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 kt to 500 kt (severe failure), and the rest had unacceptable yields between those extremes.
Of particular concern was Livermore's W47 warhead for the Polaris submarine missile. The last test before the 1958 moratorium was a one-point test of the W47 primary, which had an unacceptably high nuclear yield of 400 lb of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. Los Alamos had a suitable primary that was one-point safe, but rather than share with Los Alamos the credit for designing the first SLBM warhead, Livermore chose to use mechanical safing on its own inherently unsafe primary. The wire safety scheme described above was the result.
It turns out that the W47 may have been safer than anticipated. The wire-safety system may have rendered most of the warheads "duds," unable to fire when detonated.
When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing.
In addition to the above steps to reduce the probability of a nuclear detonation arrising from a single fault, locking mechanisms referred to by NATO states as Permissive Action Links are sometimes attached to the control mechanisms for nuclear warheads. Permissive Action Links act solely to prevent an unauthorised use of a nuclear weapon.
- ^ The physics package is the nuclear explosive module inside the bomb casing, missile warhead, or artillery shell, etc., which delivers the weapon to its target. While photographs of weapon casings are common, photographs of the physics package are quite rare, even for the oldest and crudest nuclear weapons. For a photograph of a modern physics package see W80.
- ^ Carson Mark, Theodore Taylor, Eugene Eyster, William Maraman, and Jacob Wechsler, "Can Terrorists Build Nuclear Weapons?" Nuclear Control Institute, undated (the first author died in 1997).
- ^ The United States and the Soviet Union were the only nations to build large nuclear arsenals with every possible type of nuclear weapon. The U.S. had a four-year head start and was the first to produce fissile material and fission weapons, all in 1945. The only Soviet claim for a design first was the Joe 4 detonation on August 12, 1953, said to be the first deliverable hydrogen bomb. However, as Herbert York first revealed in The Advisors: Oppenheimer, Teller and the Superbomb (W.H. Freeman, 1976), it was not a true hydrogen bomb (it was a boosted fission weapon of the Sloika/Alarm Clock type, not a two-stage thermonuclear). Soviet dates for the essential elements of warhead miniaturization – boosted, hollow-pit, two-point, air lens primaries – are not available in the open literature, but the larger size of Soviet ballistic missiles is often explained as evidence of an initial Soviet difficulty in miniaturizing warheads.
- ^ The main source for this section is Samuel Glasstone and Philip Dolan, The Effects of Nuclear Weapons, Third Edition, 1977, U.S. Dept of Defense and U.S. Dept of Energy (see links in General References, below), with the same information in more detail in Samuel Glasstone, Sourcebook on Atomic Energy, Third Edition, 1979, U.S. Atomic Energy Commission, Krieger Publishing.
- ^ Glasstone and Dolan, Effects, p. 12.
- ^ Glasstone, Sourcebook, p. 503.
- ^ a b Glasstone and Dolan, Effects, p. 21.
- ^ "Restricted Data Declassification Decisions from 1945 until Present" - "Fact that plutonium and uranium may be bonded to each other in unspecified pits or weapons."
- ^ "Restricted Data Declassification Decisions from 1946 until Present"
- ^ Fissionable Materials section of the Nuclear Weapons FAQ, Carey Sublette, accessed Sept 23, 2006
- ^ All information on nuclear weapon tests comes from Chuck Hansen, The Swords of Armageddon: U.S. Nuclear Weapons Development since 1945, October 1995, Chucklea Productions, Volume VIII, p. 154, Table A-1, "U.S. Nuclear Detonations and Tests, 1945-1962."
- ^ Nuclear Weapons FAQ: 126.96.36.199 Hybrid Assembly Techniques, accessed December 1, 2007. Drawing adapted from the same source.
- ^ Nuclear Weapons FAQ: 188.8.131.52.2.4 Cylindrical and Planar Shock Techniques, accessed December 1, 2007.
- ^ "Restricted Data Declassification Decisions from 1946 until Present", Section V.B.2.k "The fact of use in high explosive assembled (HEA) weapons of spherical shells of fissile materials, sealed pits; air and ring HE lenses," declassified November 1972.
- ^ Howard Morland, "Born Secret," Cardozo Law Review, March 2005, pp. 1401-1408.
- ^ "Improved Security, Safety & Manufacturability of the Reliable Replacement Warhead," NNSA March 2007.
- ^ A 1976 drawing which depicts an interstage that absorbs and re-radiates x-rays. From Howard Morland, "The Article," Cardozo Law Review, March 2005, p 1374.
- ^ "SAND8.8 - 1151 Nuclear Weapon Data -- Sigma I," Sandia Laboratories, September 1988.
- ^ The Greenpeace drawing. From Morland, Cardozo Law Review, March 2005, p 1378.
- ^ Herbert York, The Advisors: Oppenheimer, Teller and the Superbomb (1976).
- ^ "The ‘Alarm Clock' . . . became practical only by the inclusion of Li6 (in 1950) and its combination with the radiation implosion." Hans A. Bethe, Memorandum on the History of Thermonuclear Program, May 28, 1952.
- ^ See map.
- ^ Samuel Glasstone, The Effects of Nuclear Weapons, 1962, Revised 1964, U.S. Dept of Defense and U.S. Dept of Energy, pp.464-5. This section was removed from later editions, but, according to Glasstone in 1978, not because it was inaccurate or because the weapons had changed.
- ^ Nuclear Weapons FAQ: 1.6.
- ^ Neutron bomb: Why 'clean' is deadly.
- ^ Broad, William J. (7 September 1999), "Spies versus sweat, the debate over China's nuclear advance," New York Times, p 1. The front page drawing was similar to one that appeared four months earlier in the the San Jose Mercury News.
- ^ Jonathan Medalia, "The Reliable Replacement Warhead Program: Background and Current Developments," CRS Report RL32929, Dec 18, 2007, p CRS-11.
- ^ Richard Garwin, "Why China Won't Build U.S. Warheads", Arms Control Today, April-May 1999.
- ^ Home - NNSA
- ^ DoE Fact Sheet: Reliable Replacement Warhead Program
- ^ Sybil Francis, Warhead Politics: Livermore and the Competitive System of Nuclear Warhead Design, UCRL-LR-124754, June 1995, Ph.D. Dissertation, Massachusetts Institute of Technology, available from National Technical Information Service. This 233-page thesis was written by a weapons-lab outsider for public distribution. The author had access to all the classified information at Livermore that was relevant to her research on warhead design; consequently, she was required to use non-descriptive code words for certain innovations.
- ^ Walter Goad, Declaration for the Wen Ho Lee case, May 17, 2000. Goad began thermonuclear weapon design work at Los Alamos in 1950. In his Declaration, he mentions "basic scientific problems of computability which cannot be solved by more computing power alone. These are typified by the problem of long range predictions of weather and climate, and extend to predictions of nuclear weapons behavior. This accounts for the fact that, after the enormous investment of effort over many years, weapons codes can still not be relied on for significantly new designs."
- ^ Chuck Hansen, The Swords of Armageddon, Volume IV, pp. 211-212, 284.
- ^ The public literature mentions three different force mechanism for this implosion: radiation pressure, plasma pressure, and explosive ablation of the outer surface of the secondary pusher. All three forces are present; and the relative contribution of each is one of the things the computer simulations try to explain. See Teller-Ulam design.
- ^ Dr. John C. Clark, as told to Robert Cahn, "We Were Trapped by Radioactive Fallout," The Saturday Evening Post, July 20, 1957, pp. 17-19, 69-71.
- ^ Richard Rhodes, Dark Sun; the Making of the Hydrogen Bomb, Simon and Schuster, 1995, p. 541.
- ^ Chuck Hansen, The Swords of Armageddon, Volume VII, pp. 396-397.
- ^ Sybil Francis, Warhead Politics, pp. 141, 160.
GeneralWikimedia Commons has media related to: Nuclear weapon design
- Glasstone, Samuel and Dolan, Philip J., The Effects of Nuclear Weapons (third edition) (hosted at the Trinity Atomic Web Site), U.S. Government Printing Office, 1977. PDF Version
- Cohen, Sam, The Truth About the Neutron Bomb: The Inventor of the Bomb Speaks Out, William Morrow & Co., 1983
- Grace, S. Charles, Nuclear Weapons: Principles, Effects and Survivability (Land Warfare: Brassey's New Battlefield Weapons Systems and Technology, vol 10)
- Hansen, Chuck, The Swords of Armageddon: U.S. Nuclear Weapons Development since 1945, October 1995, Chucklea Productions, eight volumes (CD-ROM), two thousand pages.
- Smyth, Henry DeWolf, Atomic Energy for Military Purposes, Princeton University Press, 1945. (see: Smyth Report)
- The Effects of Nuclear War, Office of Technology Assessment (May 1979).
- Rhodes, Richard. Dark Sun: The Making of the Hydrogen Bomb. Simon and Schuster, New York, (1995 ISBN 0-684-82414-0)
- Rhodes, Richard. The Making of the Atomic Bomb. Simon and Schuster, New York, (1986 ISBN 0-684-81378-5)
- Carey Sublette's Nuclear
Weapon Archive is a reliable source of information and has links to other
- Nuclear Weapons Frequently Asked Questions: Section 4.0 Engineering and Design of Nuclear Weapons
- The Federation of American Scientists provides solid information on weapons of mass destruction, including nuclear weapons and their effects
- Globalsecurity.org provides a well-written primer in nuclear weapons design concepts (site navigation on righthand side).
- More information on the design of two-stage fusion bombs
- Militarily Critical Technologies List (MCTL) from the US Government's Defense Technical Information Center
- "Restricted Data Declassification Decisions from 1946 until Present", Department of Energy report series published from 1994 until January 2001 which lists all known declassification actions and their dates. Hosted by Federation of American Scientists.
- The Holocaust Bomb: A Question of Time is an update of the 1979 court case USA v. The Progressive, with links to supporting documents on nuclear weapon design.
- Annotated bibliogrphy on nuclear weapons design from the Alsos Digital Library for Nuclear Issues
Inertial fusion · Pressurized water (PWR) · Boiling water (BWR) · Generation IV · Fast breeder (FBR) · Fast neutron (FNR) · Magnox · Advanced gas-cooled (AGR) · Gas-cooled fast (GFR) · Molten salt (MSR) · Liquid-metal-cooled (LMFR) · Lead-cooled fast (LFR) · Sodium-cooled fast (SFR) · Supercritical water (SCWR) · Very high temperature (VHTR) · Pebble bed · Integral Fast (IFR) · SSTAR
History · Design · Warfare · Arms race · Explosion (effects) · Testing (underground) · Delivery · Proliferation · Yield (TNTe)
List of states with nuclear weapons · List of nuclear tests · List of nuclear weapons
Link former page on this page
Related word on this page | http://wikipedia.atpedia.com/en/articles/n/u/c/Nuclear_weapon_design.html | 13 |
51 | Apps for Common Core Math Standards, Grades 6-8
0 comment(s) so far...
December 1, 2011 By: Vicki Windman
The sixth grade standard includes five components:
- Ratios and Proportional Relationships - Understand ratio concepts and use ratio reasoning to solve problems.
- The Number System - Extend previous understanding of fractions including multiplying and dividing fractions, compute fluently with multi-digit numbers and find common factors and multiples.
- Expressions and Equations - Reason about and solve one-variable equations and inequalities.
- Geometry - Solve real-world and mathematical problems involving area, surface area, and volume.
- Statistics and Probability - Develop understanding of statistical variability.
6th Grade Math Testing Prep $2.99; Teacher upgrade $4.99 - This app covers all of the core standards, allowing students to move at their own pace with quizzes to evaluate student understanding. Topics include: ratios, probability, statistics, algebra, geometry, and multiple step procedures. The upgrade for teachers allows teachers to enter as many students as needed, set the number of questions, and review scores and track student improvement.
Algebra Touch $2.99 - Current material covers: Simplification, Like Terms, Commutativity, Order of Operations, Factorization, Prime Numbers, Elimination, Isolation, Variables, Basic Equations, Distribution, Factoring Out, Substitution, and 'More Advanced' mode.
Greatest Common Factor $.99 - Calculates the GCF or LCM of two numbers.
Elevated Math Free - Lessons include: Numbers and Operations, Measurement, Geometry Algebra and Data Analysis and Probability (grades 6-8).
Factor Race- Algebra Free - A game in which the player must identify the binomial factors of trinomial equations.
Geometry-Volume-Solids-lite Free - The lite version of this app provides a quick way to learn and calculate volume of solids. There are six computer-animated videos on the volume of the cube, rectangular solid, cylinder, sphere, cone and pyramid. The paid addition is $1.99. It includes videos on the cube, cylinder, sphere, cone and pyramid.
The seventh grade standard includes five components:
- Ratios and Proportional Relationships - Analyze proportional relationships and use them to solve real-world and mathematical problems.
- The Number System - extend previous knowledge using the four operations with fractions and the ability to divide and multiply rational numbers.
- Expressions and Equations - Use properties of operations to generate equivalent expressions.
- Geometry - Draw, construct and describe geometrical figures and describe the relationships between them.
- Statistics and Probability - Ability to use random samplings and make inferences about populations, investigate chance processes and develop, use, and evaluate probability models.
7th grade math testing prep $2.99; Teacher upgrade $4.99- Covers the Core Standards from advanced problem solving, fractions, probability, and statistics to logical calculations.
Middle School Math 7th Grade Free - Three levels for each topic: Negative numbers, absolute value and order of operations.
Middle School Math Pro 7th Grade $.99 - Three levels, 10 topics, including adding and subtracting fractions, multiplying and dividing fractions, factors and multiples, decimals, and more. Student guides a monkey down the ladder to get to the next level.
Algebra Concepts for the iPad $1.99 - Geared for 7th grade students. The app includes Variables and Expressions, Properties of Real Numbers, Solving Equations and Polynomials.
Sketchpad Explorer Free - This app was mentioned in an earlier blog. Grades 7-9 specifically are geared to early algebra.
The eighth grade standard also has five components:
- The Number System - Know that there are numbers that are not rational, and approximate them by rational numbers.
- Expressions and Equations - Work with radicals and integer exponents, understand the connections between proportional relationships, lines, and linear equations.
- Functions - Define, evaluate, and compare functions.
- Geometry - Understand congruence and similarity using physical models, transparencies, or geometry; understand and apply the Pythagorean Theorem.
- Statistics and Probability - Investigate patterns of association in bivariate data
8th Grade Math- $2.99 - Helps students understand major math terms and concepts: Sequences and Series, Polynomials, Square Roots, Introduction to Geometry, Triangles and Other Polygons, Pythagorean Theorem, and Trigonometry
Algebra Prep $3.99 - Review and practice equations, graphing, systems, exponents, factoring, rationals and more. Includes videos, practices tests and mini-tests.
Pythagorean Theorem $4.99 - Includes five video examples, eight interactive practice problems, one challenge problem, one worksheet of extra problems and one notes page.
Symmetry-Shuffle $1.99 - This mathematical puzzle allows users to explore line and rotational symmetry, while developing their spatial sense to create strategies to help them solve problems. The app is a fun way to understand the congruence, similarity, and line or rotational symmetry of objects using transformations.
ALinearEqn Linear Equations $.99 - A combined interactive Coaching Calculator and Guide that helps students master the solving of linear equations.
Vicki Windman is a special education teacher at Clarkstown High School South. | http://www.techlearning.com/Default.aspx?tabid=67&EntryId=3498 | 13 |
93 | Historically, the apparent motions of the planets were first understood geometrically (and without regard to gravity) in terms of epicycles, which are the sums of numerous circular motions. Theories of this kind predicted paths of the planets moderately well, until Johannes Kepler was able to show that the motion of the planets were in fact (at least approximately) elliptical motions. In Isaac Newton's Principia (1687), Newton derived the relationships now known as Kepler's laws of planetary motion from a force-based theory of universal gravitation. Albert Einstein's later general theory of relativity was able to account for gravity as due to curvature of space-time, with orbits following geodesics.
In the geocentric model of the solar system, the celestial spheres model was originally used to explain the apparent motion of the planets in the sky in terms of perfect spheres or rings, but after measurements of the exact motion of the planets theoretical mechanisms such as the deferent and epicycles were later added. Although it was capable of accurately predicting the planets position in the sky, more and more epicycles were required over time, and the model became more and more unwieldy.
The basis for the modern understanding of orbits was first formulated by Johannes Kepler whose results are summarised in his three laws of planetary motion. First, he found that the orbits of the planets in our solar system are elliptical, not circular (or epicyclic), as had previously been believed, and that the sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed of the planet depends on the planet's distance from the sun. And third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the sun. For the planets, the cubes of their distances from the sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the sun, their orbital periods respectively about 11.86 and 0.615 years. The proportionality is seen by the fact that the ratio for Jupiter, 5.23/11.862, is practically equal to that for Venus, 0.7233/0.6152, in accord with the relationship.
While the planetary bodies do have elliptical orbits about the Sun, the eccentricity of the orbits is often not large. A circle has an eccentricity of zero, Earth's orbit's eccentricity is 0.0167 meaning that the ratio of its semi-minor (b) to semi-major axis (a) is 99.99%. Mercury has the largest eccentricity of the planets with an eccentricity of 0.2056, b/a=97.86%. (Eris has an eccentricity of 0.441 and Pluto 0.249. For the values for all planets in one table, see Table of planets in the solar system.)
Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections, if the force of gravity propagated instantaneously. Newton showed that, for a pair of bodies, the orbits' sizes are in inverse proportion to their masses, and that the bodies revolve about their common center of mass. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body.
Albert Einstein was able to show that gravity was due to curvature of space-time and was able to remove the assumption of Newton that changes propagate instantaneously. In relativity theory orbits follow geodesic trajectories which approximate very well to the Newtonian predictions. However there are differences that can be used to determine which theory describes reality more accurately. Essentially all experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measuremental accuracy, but the differences from Newtonian mechanics are usually very small (except where there are very strong gravity fields and very high speeds).
However, Newtonian mechanics is still used for most purposes since Newtonian mechanics is significantly easier to use.
Within a planetary system; planets, dwarf planets, asteroids (a.k.a. minor planets), comets, and space debris orbit the central star in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a central star is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. To date, no comet has been observed in our solar system with a distinctly hyperbolic orbit. Bodies which are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about that planet.
Owing to mutual gravitational perturbations, the eccentricities of the orbits of the planets in our solar system vary over time. Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest eccentricities are those of the orbits of Venus and Neptune.
As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest from each other. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an Earth orbit, respectively.)
In the elliptical orbit, the center of mass of the orbiting-orbited system will sit at one focus of both orbits, with nothing present at the other focus. As a planet approaches periapsis, the planet will increase in speed, or velocity. As a planet approaches apoapsis, the planet will decrease in velocity.
There are a few common ways of understanding orbits.
As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). This is a 'thought experiment', in which a cannon on top of a tall mountain is supposed to be able to fire a cannonball horizontally at any chosen muzzle velocity. The effects of air friction on the cannonball are ignored (or perhaps the mountain is high enough that the cannon will be above the Earth's atmosphere, which comes to the same thing.)
If the cannon fires its ball with a low initial velocity, the trajectory of the ball curves downward and hits the ground (A). As the firing velocity is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense — they are describing a portion of an elliptical path around the center of gravity — but the orbits are interrupted by striking the Earth.
If the cannonball is fired with sufficient velocity, the ground curves away from the ball at least as much as the ball falls — so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity and mass of the planet, there is one specific firing velocity (practically unaffected by the mass of the ball where that is as usual very small relative to the Earth's mass) that produces a circular orbit, as shown in (C).
As the firing velocity is increased beyond this, a range of elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be elliptical orbits at slower velocities; these will come closest to the Earth at the point half an orbit beyond, and directly opposite, the firing point.
At a specific velocity called escape velocity, again dependent on the firing height and mass of the planet, an open orbit such as (E) results — a parabolic trajectory. At even faster velocities the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space".
The velocity relationship of two moving objects with mass can thus be considered in four practical classes, with subtypes:
In many situations relativistic effects can be neglected, and Newton's laws give a highly accurate description of the motion. Then the acceleration of each body is equal to the sum of the gravitational forces on it, divided by its mass, and the gravitational force between each pair of bodies is proportional to the product of their masses and decreases inversely with the square of the distance between them. To this Newtonian approximation, for a system of two point masses or spherical bodies, only influenced by their mutual gravitation (the two-body problem), the orbits can be exactly calculated. If the heavier body is much more massive than the smaller, as for a satellite or small moon orbiting a planet or for the Earth orbiting the Sun, it is accurate and convenient to describe the motion in a coordinate system that is centered on the heavier body, and we can say that the lighter body is in orbit around the heavier. For the case where the masses of two bodies are comparable, an exact Newtonian solution is still available, and qualitatively similar to the case of dissimilar masses, by centering the coordinate system on the center of mass of the two.
Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two massive bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. For point masses the gravitational energy decreases without limit as they approach zero separation, and it is convenient and conventional to take the potential energy as zero when they are an infinite distance apart, and then negative (since it decreases from zero) for smaller finite distances.
With two bodies, an orbit is a conic section. The orbit can be open (so the object never returns) or closed (returning), depending on the total energy (kinetic + potential energy) of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position, in the case of a closed orbit, always less. Since the kinetic energy is never negative, if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits have negative total energy, parabolic trajectories have zero total energy, and hyperbolic orbits have positive total energy.
An open orbit has the shape of a hyperbola (when the velocity is greater than the escape velocity), or a parabola (when the velocity is exactly the escape velocity). The bodies approach each other for a while, curve around each other around the time of their closest approach, and then separate again forever. This may be the case with some comets if they come from outside the solar system.
A closed orbit has the shape of an ellipse. In the special case that the orbiting body is always the same distance from the center, it is also the shape of a circle. Otherwise, the point where the orbiting body is closest to Earth is the perigee, called periapsis (less properly, "perifocus" or "pericentron") when the orbit is around a body other than Earth. The point where the satellite is farthest from Earth is called apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part.
Orbiting bodies in closed orbits repeat their path after a constant period of time. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. These can be formulated as follows:
Note that that while the bound orbits around a point mass, or a spherical body with an ideal Newtonian gravitational field, are all closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (as caused, for example, by the slight oblateness of the Earth, or by relativistic effects, changing the gravitational field's behavior with distance) will cause the orbit's shape to depart to a greater or lesser extent from the closed ellipses characteristic of Newtonian two body motion. The 2-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the 3-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies.
Instead, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms.
One form takes the pure elliptic motion as a basis, and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies. The equations of motion of the moon, planets and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still there are secular phenomena that have to be dealt with by post-newtonian methods.
The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces will equal the mass times its acceleration (F = ma). Therefore accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial ones corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a tiny time in the future, then repeat this. However, tiny arithmetic errors from the limited accuracy of a computer's math accumulate, limiting the accuracy of this approach.
Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large objects have been simulated.
Note that the following is a classical (Newtonian) analysis of orbital mechanics, which assumes the more subtle effects of general relativity (like frame dragging and gravitational time dilation) are negligible. Relativistic effects cease to be negligible when near very massive bodies (as with the precession of Mercury's orbit about the Sun), or when extreme precision is needed (as with calculations of the orbital elements and time signal references for GPS satellites).
To analyze the motion of a body moving under the influence of a force which is always directed towards a fixed point, it is convenient to use polar coordinates with the origin coinciding with the center of force. In such coordinates the radial and transverse components of the acceleration are, respectively:
Since the force is entirely radial, and since acceleration is proportional to force, it follows that the transverse acceleration is zero. As a result,
aθ = 0
After integrating, we have
which is actually the theoretical proof of Kepler's 2nd law (A line joining a planet and the sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. It then follows that
where we have introduced the auxiliary variable
where G is the constant of universal gravitation, m is the mass of the orbiting body (planet), and M is the mass of the central body (the Sun). Substituting into the prior equation, we have
So for the gravitational force — or, more generally, for any inverse square force law — the right hand side of the equation becomes a constant and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable). The solution is:
where A and θ0 are arbitrary constants.
The equation of the orbit described by the particle is thus:
where e is:
The analysis so far has been two dimensional; it turns out that an unperturbed orbit is two dimensional in a plane fixed in space, and thus the extension to three dimensions requires simply rotating the two dimensional plane into the required angle relative to the poles of the planetary body involved.
The rotation to do this in three dimensions requires three numbers to uniquely determine; traditionally these are expressed as three angles.
The orbital period is simply how long an orbiting body takes to complete one orbit.
It turns out that it takes a minimum 6 numbers to specify an orbit about a body, and this can be done in several ways. For example, specifying the 3 numbers specifying location and 3 specifying the velocity of a body gives a unique orbit that can be calculated forwards (or backwards). However, traditionally the parameters used are slightly different.
In principle once the orbital elements are known for a body, its position can be calculated forward and backwards indefinitely in time. However, in practice, orbits are affected or perturbed, by forces other than gravity due to the central body and thus the orbital elements change over time.
An orbital perturbation is when a force or impulse which is much smaller than the overall force or average impulse of the main gravitating body and which is external to the two orbiting bodies causes an acceleration, which changes the parameters of the orbit over time.
A small radial impulse given to a body in orbit changes the eccentricity, but not the orbital period (to first order). A prograde or retrograde impulse (i.e. an impulse applied along the orbital motion) changes both the eccentricity and the orbital period. Notably, a prograde impulse given at periapsis raises the altitude at apoapsis, and vice versa, and a retrograde impulse does the opposite. A transverse impulse (out of the orbital plane) causes rotation of the orbital plane without changing the period or eccentricity. In all instances, a closed orbit will still intersect the perturbation point.
If an orbit is about a planetary body with significant atmosphere, its orbit can decay because of drag. Particularly at each periapsis, the object experiences atmospheric drag, losing energy. Each time, the orbit grows less eccentric (more circular) because the object loses kinetic energy precisely when that energy is at its maximum. This is similar to the effect of slowing a pendulum at its lowest point; the highest point of the pendulum's swing becomes lower. With each successive slowing more of the orbit's path is affected by the atmosphere and the effect becomes more pronounced. Eventually, the effect becomes so great that the maximum kinetic energy is not enough to return the orbit above the limits of the atmospheric drag effect. When this happens the body will rapidly spiral down and intersect the central body.
The bounds of an atmosphere vary wildly. During solar maxima, the Earth's atmosphere causes drag up to a hundred kilometres higher than during solar minima.
Some satellites with long conductive tethers can also decay because of electromagnetic drag from the Earth's magnetic field. Basically, the wire cuts the magnetic field, and acts as a generator. The wire moves electrons from the near vacuum on one end to the near-vacuum on the other end. The orbital energy is converted to heat in the wire.
Orbits can be artificially influenced through the use of rocket motors which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated.
Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the sun, and so can be used indefinitely. See statite for one such proposed use.
Orbital decay can also occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit the orbiting object is moving faster than the body's surface the bulges lag a short angle behind it. The gravity of the bulges is slightly off of the primary-satellite axis and thus has a component along the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the solar system are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example, and is expected to either impact Mars' surface or break up into a ring within 50 million years.
Finally, orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely.
The standard analysis of orbiting bodies assumes that all bodies consist of uniform spheres, or more generally, concentric shells each of uniform density. It can be shown that such bodies are gravitationally equivalent to point sources.
However, in the real world, many bodies rotate, and this introduces oblateness and distorts the gravity field, and gives a quadrupole moment to the gravitational field which is significant at distances comparable to the radius of the body.
The general effect of this is to change the orbital parameters over time; predominantly this gives a rotation of the orbital plane around the rotational pole of the central body (it perturbs the argument of perigee) in a way that is dependent on the angle of orbital plane to the equator as well as altitude at perigee.
The effects of other gravitating bodies can be very large. For example, the orbit of the Moon cannot be in any way accurately described without allowing for the action of the Sun's gravity as well as the Earth's.
In general when there are more than two gravitating bodies it is referred to as an n-body problem. Most n-body problems have no closed form solution, although there are number of special cases.
For smaller bodies particularly, light and stellar wind can cause significant perturbations to the attitude and direction of motion of the body, and over time can be quite significant. Of the planetary bodies, the motion of asteroids is particularly affected over large periods when the asteroids are rotating relative to the Sun.
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and Newton's law of universal gravitation. It is a core discipline within space mission design and control. Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbit plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity is a more exact theory than Newton's laws for calculating orbits, and is sometimes necessary for greater accuracy or in high-gravity situations (such as orbits close to the Sun).
The gravitational constant G is measured to be the following (shown with the 3 most common units):
Thus the constant has dimension density−1 time−2. This corresponds to the following properties.
Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if for example distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence velocities are halved and orbital periods remain the same. Similarly, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the earth.
Scaling of distances while keeping the masses the same (in the case of point masses, or by reducing the densities) gives similar orbits; if distances are multiplied by 4, gravitational forces and accelerations are divided by 16, velocities are halved and orbital periods are multiplied by 8.
When all densities are multiplied by 4, orbits are the same; gravitational forces are multiplied by 16 and accelerations by 4, velocities are doubled and orbital periods are halved.
When all densities are multiplied by 4, and all sizes are halved, orbits are similar; masses are divided by 2, gravitational forces are the same, gravitational accelerations are doubled. Hence velocities are the same and orbital periods are halved.
In all these cases of scaling. if densities are multiplied by 4, times are halved; if velocities are doubled, forces are multiplied by 16.
These properties are illustrated in the formula (derived from the formula for the orbital period )
(There is currently no text in this page)
An orbit is the path that an object takes in space when it goes around a star, a planet, or a moon. It can also be used as a verb. For instance: “The earth orbits around the Sun.” The word ‘revolves’ has the same meaning.
Many years ago, people thought that the sun orbits in a circle around the earth. Every morning the sun came up in the east and went down in the west. It just seemed to make sense that it was going around the earth. But now, thanks to people like Copernicus and Galileo Galilei, we know that the sun is the center of the solar system, and the earth orbits around it. Since a satellite is an object in space that revolves around another object, the earth is a satellite of the sun, just like the moon is a satellite of the earth! The sun has lots of satellites orbiting around it, like the planets, and thousands of asteroids, comets, and meteoroids. The earth just has one natural satellite (the moon), but there are lots of satellites orbiting the earth.
When people first began to think about orbits, they thought that all orbits had to be perfect circles, and they thought that the circle was a "perfect" shape. But when people began to study the motions of planets carefully, they saw that the planets were not moving in perfect circles. Some of the planets have orbits that are almost perfect circles, and others have orbits that are longer and less like a perfect circle.
An orbital period is the time that it takes for one object - that is, satellite - to orbit around another object. For instance, the Earth's orbital period is one year: 365.25 days. (The extra ".25" is why we have a leap day once every four years.)
Johannes Kepler (lived 1571-1630) wrote mathematical "laws of planetary motion", which gave a good idea of the movements of the planets because he found that the orbits of the planets in our solar system are not really circles, but are really ellipses (a shape like an egg or a "flattened circle"). Which is why planet's orbits are described as elliptical. The more elliptical an orbit is, the more eccentric the orbit is. | http://www.thefullwiki.org/Orbit | 13 |
55 | |This java applet draws the medians and other area bisectors of a triangle. Click and drag within the applet area to change the triangle. What proportion of the triangle is the red deltoid? More information below.|
The blue lines above are the medians of the triangle. Each one connects a vertex of the triangle to the mid-point of the opposite edge. Each median divides the area of the triangle in half. They intersect two-thirds of the way along their length in the centroid of the triangle. The centroid is the centre of mass (or center of gravity - depending on your spelling and approach) of the solid triangle, so a solid triangle would balance on any line through the centroid.
However, it is not true that all straight lines through the centroid divide the area of the triangle in half. While the medians do divide the triangle into two equal areas, other lines through the centroid do not, and in the worst case (when the line through the centroid is parallel to an edge) the tringle on one side is 4/9 of the area of the original triangle, while the trapezium on the other side has an area 5/9 of the original triangle.
Some of the other lines which are area bisectors are shown above in green. Together, their envelope is a deltoid, shown in red, and any point inside the deltoid has three area bisectors passing through it, while any point outside it has only one. The curved edges of the detoid are segments of hyperbolae, and its three vertices are the mid-points of the medians. Since the proportions in the diagram are invariant under affine transformations, the deltoid's area is a fixed proportion of the area of the triangle. By taking a simple triangle, such as the one with corners at (0,0), (0,1) and (1,0), it is not difficult to find that the proportion is 3/4*loge(2)-1/2 = 0.019860... If this seems small (less than a fiftieth the original triangle), remember that the area of the triangle with straight edges using the corners of the deltoid is only a sixteenth of the area of the original triangle and the deltoid is almost a third of this smaller triangle.
It is easy to construct a set of points met by all the medians and other area bisectors. Indeed the intersection of the deltoid with a median or another area biscetor is such a set. The intersection of the deltoid and a median or other area bisector is 1/sqrt(2) - 1/2 = 0.2071... of the length of that median or area bisector.
Any comments on this page?
See the Java code.
This page is based on discussions in the newsgroup geometry.puzzles in October and December 2001. Archives of these can be found here: 1 2 3 4 5.
Zak Seidov produced a couple of related pages on a 3,4, 5 triangle: Centroid and 3-points Deltoid Area
Rouben Rostamian produced a picture of the area of the deltoid
The following links to some of my pages are less directly relevant:
Triangular optical illusion
Area of a triangle (7 times)
Pythagoras's theorem in moving pictures
Circumnavigating Platonic polyhedra
Map projections of the world
Henry Bottomley January 2002 | http://www.se16.info/js/halfarea.htm | 13 |
52 | In geology and other related disciplines, seismic noise is a generic name for a relatively persistent vibration of the ground, due to a multitude of causes, that is a non-interpretable or unwanted component of signals recorded by seismometers.
Physically, seismic noise consists mostly of surface waves. Low frequency waves (below 1 Hz) are generally called microseisms; high frequency waves (above 1 Hz) are called microtremors. Its causes include nearby human activities (such as traffic or heavy machinery), winds and other atmospheric phenomena, and ocean waves.
Seismic noise is relevant to any discipline that depends on seismology, such as geology, oil exploration, hydrology, earthquake engineering, and structural health monitoring. It is often called the ambient wavefield or ambient vibrations in those disciplines. (However, the latter term may also refer to vibrations transmitted through the air, a building, or supporting structures.)
Seismic noise is a nuisance for activities that are sensitive to vibrations, such as accurate measurements, precision milling, telescopes, and crystal growing. On the other hand, seismic noise does have some practical uses, for example to determine the low-strain dynamic properties of civil-engineering structures, such as bridges, buildings, and dams; or to determine the elastic properties of the soil and subsoil in order to draw seismic microzonation maps showing the predicted ground response to earthquakes.
Research on the origin of seismic noise indicates that the low frequency part of the spectrum (below 1 Hz) is due to natural causes, chiefly ocean waves. In particular the peak between 0.1 and 0.3 Hz is clearly associated with the interaction of water waves of nearly equal frequencies but opposite directions. At high frequency (above 1 Hz), seismic noise is mainly produced by human activities such as road traffic and industrial work; but there are also natural sources, like rivers. Around 1 Hz, wind and other atmospheric phenomena are also a major source of ground vibrations.
Physical characteristics
Seismic noise includes a small amount of body waves (P- and S-waves), but surface waves (Love and Rayleigh waves) predominate. These waves are dispersive, meaning that their phase velocity varies with frequency (in general, it decreases with increasing frequency). Since the dispersion curve (phase velocity or slowness as a function of frequency) is closely related to the variation of shear-wave velocity with depth in the different ground layers, it can be used as a non-invasive tool to investigate the underground structure.
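The principle can be illustrated with a simple two-station estimate: the phase of the cross-spectrum between two sensors gives, at each frequency, the travel-time delay of the dominant surface waves, from which a phase velocity can be derived. The sketch below (Python, using numpy and scipy) assumes idealized plane waves travelling along the line joining two hypothetical sensors s1 and s2 a known distance apart; real noise wavefields arrive from many azimuths, so this is only an illustration of how a dispersion curve can be read from recorded phases, not a production tool.

```python
import numpy as np
from scipy.signal import csd

def two_station_phase_velocity(s1, s2, fs, distance_m, nperseg=4096):
    """Estimate a surface-wave phase-velocity (dispersion) curve from two
    vertical-component noise records, assuming plane waves travelling from
    s1 to s2 along the inter-station line (an idealisation of the wavefield)."""
    f, Pxy = csd(s1, s2, fs=fs, nperseg=nperseg)   # complex cross-spectrum
    phase = np.unwrap(np.angle(Pxy))               # inter-station phase lag (rad)
    valid = (f > 0) & (np.abs(phase) > 1e-6)
    # phase lag = 2*pi*f * distance / c(f)  =>  c(f) = 2*pi*f * distance / phase
    c = 2.0 * np.pi * f[valid] * distance_m / np.abs(phase[valid])
    return f[valid], c                             # frequency (Hz), phase velocity (m/s)
```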
Seismic noise has a very low amplitude and cannot be felt by humans. Its amplitude was also too low to be recorded by the first seismometers at the end of the 19th century. However, at that time, the famous Japanese seismologist Fusakichi Omori could already record ambient vibrations in buildings, where the amplitudes are magnified. He determined the buildings' resonance frequencies and studied their evolution as a function of damage.
Applications to civil engineering
After the 1933 Long Beach earthquake in California, a large experimental campaign led by D. S. Carder in 1935 recorded and analyzed ambient vibrations in more than 200 buildings. These data were used in design codes to estimate the resonance frequencies of buildings, but interest in the method declined until the 1950s. Interest in ambient vibrations in structures then grew again, especially in California and Japan, thanks to the work of earthquake engineers including G. Housner, D. Hudson, K. Kanai, T. Tanaka, and others.
Ambient vibration techniques were, however, supplanted (at least for some time) by forced vibration techniques, which increase the amplitudes and allow the shaking source to be controlled, together with their system identification methods. Even though M. Trifunac showed in 1972 that ambient and forced vibrations led to the same results, interest in ambient vibration techniques only rose again in the late 1990s. They have now become quite attractive, due to their relatively low cost and convenience and to recent improvements in recording equipment and computation methods. The results of this low-strain dynamic probing have been shown to be close to the dynamic characteristics measured under strong shaking, at least as long as the buildings are not severely damaged.
Scientific study and applications in geology
The recording of seismic noise directly from the ground started in the 1950s with the enhancement of seismometers to monitor nuclear tests and the development of seismic arrays. The main contributions to the analysis of these recordings at that time came from the Japanese seismologist K. Aki in 1957. He proposed several methods used today for local seismic evaluation, such as Spatial Autocorrelation (SPAC), Frequency-wavenumber (FK), and correlation. However, the practical implementation of these methods was not possible at that time because of the low precision of clocks in seismic stations.
Again, improvements in instrumentation and algorithms led to renewed interest in those methods in the 1990s. In 1989, Y. Nakamura rediscovered the Horizontal-to-Vertical Spectral Ratio (H/V) method to derive the resonance frequency of a site. Assuming that shear waves dominate the microtremor, Nakamura observed that the H/V spectral ratio of ambient vibrations was roughly equal to the S-wave transfer function between the ground surface and the bedrock at a site. (However, this assumption has been questioned by the SESAME project.)
In the late 1990s, array methods applied to seismic noise data started to yield ground properties in terms of shear-wave velocity profiles. The European research project SESAME (2004–2006) worked to standardize the use of seismic noise to estimate the amplification of earthquakes by local ground characteristics.
Current use of ambient vibrations
Characterization of the ground properties
The analysis of ambient vibrations yields different products used to characterize ground properties. From the simplest to the most elaborate, these products are: power spectra, the H/V peak, dispersion curves, and autocorrelation functions.
- Computation of power spectra, e.g. Passive seismic.
- HVSR (H/V spectral ratio): The H/V technique is especially suited to ambient vibration recordings. Bonnefoy-Claudet et al. showed that peaks in the horizontal-to-vertical spectral ratio can be linked to the Rayleigh ellipticity peak, the Airy phase of the Love waves and/or the SH resonance frequencies, depending on the proportion of these different types of waves in the ambient noise. Fortunately, all these effects give approximately the same peak frequency for a given ground, so the H/V peak is a reliable way to estimate the resonance frequency of a site. For one sediment layer over bedrock, this value f0 is related to the S-wave velocity Vs and the depth H of the sediments by f0 = Vs / (4H). It can therefore be used to map the bedrock depth if the S-wave velocity is known (a minimal sketch is given below). This frequency peak constrains the possible models obtained using other seismic methods but is not enough to derive a complete ground model. Moreover, it has been shown that the amplitude of the H/V peak is not related to the magnitude of the amplification.
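As an illustration, the following sketch (Python, numpy and scipy only) computes an H/V spectral ratio from a three-component noise record and converts a picked peak frequency into an approximate sediment thickness with H = Vs / (4 f0). The record arrays, sampling rate and assumed S-wave velocity are placeholders, and the spectral smoothing and peak picking used in practice are deliberately omitted.

```python
import numpy as np
from scipy.signal import welch

def hv_ratio(north, east, vertical, fs, nperseg=4096):
    """Horizontal-to-vertical spectral ratio of a 3-component noise record."""
    f, Pn = welch(north, fs=fs, nperseg=nperseg)
    _, Pe = welch(east, fs=fs, nperseg=nperseg)
    _, Pv = welch(vertical, fs=fs, nperseg=nperseg)
    h = np.sqrt(0.5 * (Pn + Pe))      # quadratic mean of the horizontal amplitude spectra
    return f, h / np.sqrt(Pv)         # H/V ratio as a function of frequency

def bedrock_depth(f0_hz, vs_m_s):
    """Single-layer estimate H = Vs / (4 f0) for the sediment thickness."""
    return vs_m_s / (4.0 * f0_hz)

# Example: a 5 Hz H/V peak with an assumed Vs of 400 m/s gives about 20 m of sediments.
# print(bedrock_depth(5.0, 400.0))
```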
Array methods: Using an array of seismic sensors that simultaneously record the ambient vibrations allows a deeper understanding of the wavefield and therefore the derivation of more ground properties. Since the number of available sensors is limited, several arrays of different sizes may be deployed and the results merged. The information in the vertical components is linked only to Rayleigh waves, and is therefore easier to interpret, but methods using the three space components have also been developed, providing information about both the Rayleigh and Love wavefields. A minimal sketch of the SPAC approach is given after the list below.
- FK, HRFK using the Beamforming technique
- SPAC (Spatial Auto-correlation) method
- Correlations methods
- Refraction microtremor ReMI
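The quarter-wavelength relation mentioned in the H/V item above can be turned into a two-line calculation. The sketch below is only an illustration: the station values and the assumed S-wave velocity are hypothetical, and real studies would of course propagate the uncertainty on both f0 and Vs.

```python
# Minimal sketch: estimate sediment thickness from an H/V resonance peak
# using the one-layer quarter-wavelength relation f0 = Vs / (4 * H).
# The frequencies and S-wave velocity below are illustrative values only.

def bedrock_depth(f0_hz: float, vs_m_per_s: float) -> float:
    """Return sediment thickness H (m) for a resonance frequency f0 (Hz)
    and an average S-wave velocity Vs (m/s) of the sediment layer."""
    if f0_hz <= 0:
        raise ValueError("f0 must be positive")
    return vs_m_per_s / (4.0 * f0_hz)

if __name__ == "__main__":
    vs = 300.0  # assumed average S-wave velocity of the sediments (m/s)
    for f0 in (0.5, 1.0, 2.0, 5.0):  # hypothetical H/V peak frequencies (Hz)
        print(f"f0 = {f0:4.1f} Hz  ->  H = {bedrock_depth(f0, vs):6.1f} m")
```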
Characterization of the vibration properties of civil engineering structures
Like earthquakes, ambient vibrations set civil engineering structures such as bridges, buildings or dams into vibration. Most of the methods in use assume this vibration source to be white noise, i.e. to have a flat spectrum, so that the recorded response is actually characteristic of the system itself. The vibrations are perceptible by humans only in rare cases (bridges, high buildings). Ambient vibrations of buildings are also caused by wind and internal sources (machines, pedestrians...), but these sources are generally not used to characterize structures. The branch that studies the modal properties of systems under ambient vibrations is called Operational Modal Analysis (OMA) or output-only modal analysis and provides many useful methods for civil engineering. The observed vibration properties of structures integrate all of their complexity, including the load-bearing system, heavy and stiff non-structural elements (infill masonry panels...), light non-structural elements (windows...) and the interaction with the soil (the building foundation may not be perfectly fixed to the ground and differential motions may occur). This matters because it is difficult to produce models that can be compared with these measurements.
Single-station methods: The power spectrum of ambient vibration recordings in a structure (e.g. at the top floor of a building, where amplitudes are larger) gives an estimate of its resonance frequencies and, possibly, its damping ratio.
Transfer function method: Assuming that ground ambient vibrations are the excitation source of a structure, for instance a building, the transfer function between the bottom and the top allows the effects of a non-white input to be removed. This may be particularly useful for signals with a low signal-to-noise ratio (small building/high level of ground vibration). However, this method generally cannot remove the effect of soil-structure interaction.
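A minimal sketch of one common way to estimate such a bottom-to-top transfer function is the H1 estimator (cross-spectrum of input and output divided by the auto-spectrum of the input), computed here with scipy's welch and csd routines. The sampling rate, the peak filter standing in for the structure, and the signals are all synthetic placeholders, not part of the original text.

```python
# Sketch of a bottom-to-top transfer function estimate from two simultaneous
# ambient-vibration records, using the H1 estimator H(f) = Pxy / Pxx.
import numpy as np
from scipy.signal import csd, welch, lfilter, iirpeak

fs = 100.0                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
n = int(600 * fs)                           # 10 minutes of data

bottom = rng.normal(size=n)                 # "input": ground-level record
b, a = iirpeak(2.0, Q=10.0, fs=fs)          # crude stand-in for a ~2 Hz structure
top = lfilter(b, a, bottom) + 0.1 * rng.normal(size=n)   # "output": roof record

nperseg = 2048
f, Pxx = welch(bottom, fs=fs, nperseg=nperseg)
_, Pxy = csd(bottom, top, fs=fs, nperseg=nperseg)
H1 = Pxy / Pxx                              # complex transfer function estimate

band = (f > 0.5) & (f < 10.0)
f_peak = f[band][np.argmax(np.abs(H1[band]))]
print(f"Apparent resonance frequency: {f_peak:.2f} Hz")
```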
Arrays: These consist of simultaneous recordings at several points of a structure. The objective is to obtain the modal parameters of the structure: resonance frequencies, damping ratios and modal shapes for the whole structure. Note that without knowing the input loading, the participation factors of these modes cannot, a priori, be retrieved. Using a common reference sensor, results from different arrays can be merged.
- Methods based on correlations
Several methods use the power spectral density matrices of simultaneous recordings, i.e. the cross-correlation matrices of these recordings in the Fourier domain. They allow extraction of the operational modal parameters (Peak Picking method), which may result from mode coupling, or of the system modal parameters (Frequency Domain Decomposition method); a short sketch of this idea is given after these items.
- System identification methods
Numerous system identification methods exist in the literature to extract the system properties, and they can be applied to ambient vibrations in structures.
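The Frequency Domain Decomposition idea mentioned above can be illustrated with a short sketch: assemble the cross-power spectral density matrix of all channels at each frequency and take its singular value decomposition; peaks of the first singular value suggest modes, and the associated singular vectors approximate mode shapes. Everything here (channel count, record length, random data) is a placeholder, not the implementation used by any particular study.

```python
# Sketch of the Frequency Domain Decomposition (FDD) idea.
import numpy as np
from scipy.signal import csd

def cross_spectral_matrix(records, fs, nperseg=1024):
    """records: (n_channels, n_samples) array -> (freqs, G[f, i, j])."""
    n = records.shape[0]
    freqs, G = None, None
    for i in range(n):
        for j in range(n):
            f, Pij = csd(records[i], records[j], fs=fs, nperseg=nperseg)
            if G is None:
                freqs, G = f, np.zeros((f.size, n, n), dtype=complex)
            G[:, i, j] = Pij
    return freqs, G

def fdd_first_singular_values(G):
    """First singular value and vector of G at every frequency line."""
    s1 = np.empty(G.shape[0])
    shapes = np.empty((G.shape[0], G.shape[1]), dtype=complex)
    for k in range(G.shape[0]):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], shapes[k] = S[0], U[:, 0]
    return s1, shapes

if __name__ == "__main__":
    fs, rng = 100.0, np.random.default_rng(1)
    records = rng.normal(size=(4, 60_000))   # 4 sensors, 10 min (synthetic)
    f, G = cross_spectral_matrix(records, fs)
    s1, shapes = fdd_first_singular_values(G)
    # a real analysis would pick peaks of s1 and read mode shapes from `shapes`
```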
Inversion/Model updating/multi-model approach
The results obtained cannot directly give information on the physical parameters (S-wave velocity, structural stiffness...) of the ground or of civil engineering structures. Models are therefore needed to compute the corresponding observables (dispersion curve, modal shapes...) that can be compared with the experimental data. Computing many models to find which ones agree with the data means solving the inverse problem. The main issue in inversion is to explore the parameter space well with a limited number of model computations. However, the model that best fits the data is not necessarily the most interesting: parameter trade-offs and uncertainties in both models and data leave many models with different input parameters fitting the data equally well. The sensitivity of the parameters may also differ greatly depending on the model used. The inversion process is generally the weak point of these ambient vibration methods.
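As an illustration of the exploration problem described above, here is a minimal random-search sketch: draw many candidate models, evaluate a forward computation for each, and keep every model whose misfit is close to the best one rather than a single "best" model. The forward_model function is a deliberately crude placeholder (it returns only a fundamental resonance), and the search ranges are arbitrary; a real study would compute dispersion curves or modal quantities.

```python
# Sketch of the inversion step: sample candidate models, compare forward
# predictions to the data, and keep the whole set of acceptable models.
import numpy as np

rng = np.random.default_rng(42)

def forward_model(vs, h, freqs):
    """Placeholder forward computation: fundamental resonance only."""
    f0 = vs / (4.0 * h)
    return f0 * np.ones_like(freqs)      # stands in for a full dispersion curve

freqs = np.linspace(0.5, 10.0, 20)
observed = forward_model(350.0, 40.0, freqs) + rng.normal(0, 0.05, freqs.size)

candidates = []
for _ in range(5000):
    vs = rng.uniform(100.0, 1000.0)       # S-wave velocity search range (m/s)
    h = rng.uniform(5.0, 200.0)           # layer thickness search range (m)
    misfit = np.sqrt(np.mean((forward_model(vs, h, freqs) - observed) ** 2))
    candidates.append((misfit, vs, h))

candidates.sort()
acceptable = [c for c in candidates if c[0] < 2 * candidates[0][0]]
# `acceptable` illustrates the trade-off: many (vs, h) pairs fit equally well
```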
Equipment needed
The acquisition chain mainly consists of a seismic sensor and a digitizer. The number of seismic stations depends on the method, from a single point (spectrum, HVSR) to arrays (3 sensors or more). Three-component (3C) sensors are used except in particular applications. The sensor sensitivity and corner frequency also depend on the application. For ground measurements, velocimeters are necessary since the amplitudes are generally below the sensitivity of accelerometers, especially at low frequency. Their corner frequency depends on the frequency range of interest, but corner frequencies lower than 0.2 Hz are generally used. Geophones (generally with a 4.5 Hz corner frequency or greater) are usually not suitable. For measurements in civil engineering structures, the amplitudes as well as the frequencies of interest are generally higher, allowing the use of accelerometers or velocimeters with a higher corner frequency. However, since recording points on the ground may also be of interest in such experiments, sensitive instruments may be needed. Except for single-station measurements, a common time base is necessary for all the stations. This can be achieved by GPS clocks, by a common start signal using a remote control, or by using a single digitizer that records several sensors. The relative locations of the recording points are needed more or less precisely depending on the technique, requiring either manual distance measurements or differential GPS positioning.
Advantages and limitations
- Relatively cheap, non-invasive and non-destructive method
- Applicable to urban environment
- Provide valuable information with little data (e.g. HVSR)
- Dispersion curve of Rayleigh wave relatively easy to retrieve
- Provide reliable estimates of Vs30
Limitations of these methods are linked to the noise wavefield, but especially to common assumptions made in seismic methods:
- Penetration depth depends on the array size but also on the noise quality, resolution and aliasing limits depend on the array geometry
- Complexity of the wavefield (Rayleigh, Love waves, interpretation of higher modes...)
- Plane wave assumption for most of the array methods (problem of sources within the array)
- 1D assumption for the underground structure, although 2D approaches have also been attempted
- Inverse problem difficult to solve as for many geophysical methods
- Bonnefoy-Claudet, S., F. Cotton, and P.-Y. Bard (2006). The nature of noise wavefield and its applications for site effects studies: a literature review. Earth-Science Reviews 79: 205–227.
- Longuet-Higgins, M. S. (1950). A theory of the origin of microseisms. Philosophical Transactions of the Royal Society of London, Series A 243: 1–35.
- Hasselmann, K. (1963). A statistical analysis of the generation of microseisms. Reviews of Geophysics 1(2): 177–210.
- Kedar, S., M. Longuet-Higgins, F. W. N. Graham, R. Clayton, and C. Jones (2008). The origin of deep ocean microseisms in the north Atlantic ocean. Proceedings of the Royal Society of London, Series A, pages 1–35.
- Ardhuin, F., E. Stutzmann, M. Schimmel, and A. Mangeney (2011). Ocean wave sources of seismic noise. Journal of Geophysical Research 115.
- Peterson (1993). Observation and modeling of seismic background noise. U.S. Geological Survey Technical Report 93-322, pages 1–95.
- Davison, C. (1924). Fusakichi Omori and his work on earthquakes. Bulletin of the Seismological Society of America 14(4): 240–255.
- Carder, D. S. (1936). Earthquake investigations in California, 1934–1935, Chapter 5: Vibration observations, pages 49–106. Special Publication No. 201, U.S. Coast and Geodetic Survey.
- Kanai, K., and T. Tanaka (1961). On microtremors VIII. Bulletin of the Earthquake Research Institute 39: 97–114.
- Trifunac, M. (1972). Comparison between ambient and forced vibration experiments. Earthquake Engineering and Structural Dynamics 1: 133–150.
- Dunand, F., P. Gueguen, P.-Y. Bard, J. Rodgers, and M. Celebi (2006). Comparison of the dynamic parameters extracted from weak, moderate and strong motion recorded in buildings. First European Conference on Earthquake Engineering and Seismology (a joint event of the 13th ECEE and 30th General Assembly of the ESC), Geneva, Switzerland, 3–8 September 2006, Paper #1021.
- Aki, K. (1957). Space and time spectra of stationary stochastic waves, with special reference to microtremors. Bulletin of the Earthquake Research Institute 35: 415–457.
- Nakamura, Y. (1989). A method for dynamic characteristic estimation of subsurface using microtremor on the ground surface. Quarterly Report of the Railway Technical Research Institute 30(1): 25–33.
- Matshushima, T., and H. Okada (1990). Determination of deep geological structures under urban areas using long-period microtremors. BUTSURI-TANSA 43(1): 21–33.
- Milana, G., S. Barba, E. Del Pezzo, and E. Zambonelli (1996). Site response from ambient noise measurements: new perspectives from an array study in Central Italy. Bulletin of the Seismological Society of America 86(2): 320–328.
- Tokimatsu, K., H. Arai, and Y. Asaka (1996). Three-dimensional soil profiling in Kobe area using microtremors. Xth World Conference on Earthquake Engineering, Acapulco, Paper #1486, Elsevier Science Ltd.
- Chouet, B., G. De Luca, G. Milana, P. Dawson, M. Martini, and R. Scarpa (1998). Shallow velocity structure of Stromboli Volcano, Italy, derived from small-aperture array measurements of strombolian tremor. Bulletin of the Seismological Society of America 88(3): 653–666.
- Bonnefoy-Claudet, S., C. Cornou, P.-Y. Bard, F. Cotton, P. Moczo, J. Kristek, and D. Fäh (2006). H/V ratio: a tool for site effects evaluation. Results from 1D noise simulations. Geophysical Journal International 167: 827–837.
- Haghshenas, E., P.-Y. Bard, N. Theodulidis, and the SESAME WP04 Team (2008). Empirical evaluation of microtremor H/V spectral ratio. Bulletin of Earthquake Engineering 6(1): 75–108.
- Hans, S., C. Boutin, E. Ibraim, and P. Roussillon (2005). In situ experiments and seismic analysis of existing buildings—Part I: experimental investigations. Earthquake Engineering and Structural Dynamics 34(12): 1513–1529.
- Todorovska, M. I. (2009). Seismic interferometry of a soil-structure interaction model with coupled horizontal and rocking response. Bulletin of the Seismological Society of America 99(2A): 611–625.
- Roten, D., and D. Fäh (2007). A combined inversion of Rayleigh wave dispersion and 2-D resonance frequencies. Geophysical Journal International 168(3): 1261–1275. | http://en.wikipedia.org/wiki/Ambient_Vibrations | 13
68 | Bird Biogeography II
Tropical rain forests (areas that receive more than 100 mm of rain/month) support the richest avifaunas in the world. Why are there so many bird species in rain forests & how (& why) do the avifaunas of rain forests in different parts of the world vary? To answer these questions requires a look at the history of rain forests & their avifaunas (Karr 1990):
Off the coast of South America, the oceanic Nazca Plate is pushing into & being subducted under the continental part of the South American Plate. In turn, the overriding South American Plate is being lifted up, creating the Andes mountains.
The convergence of the Nazca and South American Plates
has deformed and pushed up limestone strata to form the towering peaks
of the Andes, as seen here in Peru
Tectonic plate collision
Avian species richness -- Numerous hypotheses have been proposed to explain regional variability in species richness, and recent research efforts have winnowed the number of potential hypotheses to a credible few: (1) energy availability, (2) evolutionary time, (3) habitat heterogeneity, (4) area, and (5) geometric constraints. Rahbek and Graves (2001) examined bird diversity in South America and found that the 1° quadrats (map a to the right) that exhibited the highest avian diversity (>650 species) were restricted to Andean Ecuador (peaking at 845 species), southeastern Peru (peaking at 782 species) and southern Bolivia (peaking at 698 species). These quadrats were physiographically complex (range = 1,700-5,700 m) and characterized by moderate precipitation (1,058-3,096 mm/yr) and maximum daily temperatures (16.9-25.3°C). Thus, neither area nor energy alone is sufficient to explain patterns of avian species richness in South America. If energy and biome area were the primary determinants of species richness, then species richness would be highest in central Amazonia, which was not the case.
Species richness in neotropical birds seems to be linked directly to habitat diversity, which is correlated with topographic heterogeneity. The number of different ecosystems found in 1° quadrats was correlated with topographic relief at tropical latitudes (<20°). Quadrats in the species-poor zone in central Amazonia overlapped 5-16 distinctive ecosystems, whereas species-rich quadrats (>650 species) overlapped 16-24 ecosystems.
The extraordinary abundance of species associated with humid montane regions at equatorial latitudes reflects the overwhelming influence of orography and climate on the generation and maintenance of species richness. Rahbek and Graves' (2001) data reinforce the hypothesis that terrestrial species richness from the equator to the poles is governed by a synergism between climate and coarse-scale topographic heterogeneity.
Species richness of land and water birds (Aves) of South America compiled at 1° × 1°, 3° × 3°, 5° × 5°, and 10° × 10° scales. Note the loss of information and the spurious extrapolation of high species densities in species-poor localities at coarser spatial scales (From: Rahbek & Graves 2001).
Geographic variation in the richness of breeding terrestrial bird species in South America (Hawkins et al. 2003).
Global maps of avian species richness at three sampling resolutions. An equal-area (Behrmann) map projection was used, with grid cells having longitudinal cell resolutions of: (a) 1°; (b) 2° and (c) 4°.
Topography, energy and the global distribution of bird species richness -- A major goal of ecology is to determine the causes of the latitudinal gradient in global distribution of species richness. Current evidence points to either energy availability or habitat heterogeneity as the most likely environmental drivers in terrestrial systems, but their relative importance is controversial in the absence of analyses of global (rather than continental or regional) extent. Davies et al. (2007) used data on the global distribution of extant continental and continental island bird species to test the explanatory power of energy availability and habitat heterogeneity while simultaneously addressing issues of spatial resolution, spatial autocorrelation, geometric constraints upon species' range dynamics, and the impact of human populations and historical glacial ice-cover. Global maps of avian species richness data used in their analyses showed a consistent pattern across all three sampling resolutions (see Figure above), namely higher species richness within the tropics, with peaks coinciding with major mountain chains, most notably along the Andes and the southern slopes of the Himalayas and to a lesser extent the African Rift Valley. The best-fit multi-predictor global model accounting for glacial history and human impacts showed elevation range to be the strongest predictor of avian species richness, closely followed by temperature and then habitat diversity and productive energy. This global perspective confirms the primary importance of mountain ranges in high-energy areas.
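To make the "best-fit multi-predictor model" idea above concrete, here is a small ordinary-least-squares sketch on synthetic grid-cell data, comparing standardized coefficients for elevation range, temperature, habitat diversity and energy. The numbers are invented for illustration only; the actual analysis used global gridded bird and environmental data and also dealt with spatial autocorrelation, glacial history and human impacts, which this sketch ignores.

```python
# Illustrative multi-predictor regression of species richness on environment.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # number of grid cells (synthetic)
elev_range = rng.gamma(2.0, 800.0, n)     # elevation range (m)
temperature = rng.normal(18.0, 8.0, n)    # mean temperature (deg C)
habitat_div = rng.integers(1, 25, n)      # number of habitat types
energy = rng.gamma(3.0, 400.0, n)         # productive-energy proxy

richness = (0.05 * elev_range + 6.0 * temperature
            + 4.0 * habitat_div + 0.02 * energy
            + rng.normal(0, 40.0, n))     # synthetic response variable

X = np.column_stack([elev_range, temperature, habitat_div, energy])
Xz = (X - X.mean(0)) / X.std(0)           # standardize to compare predictors
Xz = np.column_stack([np.ones(n), Xz])    # add intercept column
beta, *_ = np.linalg.lstsq(Xz, richness, rcond=None)
for name, b in zip(["intercept", "elev_range", "temperature",
                    "habitat_div", "energy"], beta):
    print(f"{name:12s} {b:8.2f}")
```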
A palaeobiogeographic model for terrestrial environments of Amazonia for the last 3 million years based on the evolutionary history of trumpeters (Psophia) and geological data. Rivers and their tributaries are depicted as in the present, but their palaeopositions may have differed. (a) About 3.0–2.7 million years ago (Ma): western lowland Amazonia is a large interconnected wetland/lake/river system. (b) About 2.7–2.0 Ma: the wetland system drained significantly and the lower Amazon River, which isolated northern and southern populations of Psophia, was established. As terra firme forests developed, populations expanded westward. (c) About 2.0–1.0 Ma: the Rio Madeira drainage was established, promoting speciation of P. leucoptera (Inambari area). (d) About 1.3–0.8 Ma: the Rio Tapajós drainage system developed, resulting in the differentiation of P. viridis (Rondônia area). At the same time, a barrier (red bar) is postulated to have isolated the Napo area of endemism within which P. napensis differentiated. (e) About 1.0–0.7 Ma: an isolating barrier associated with the lower Rio Negro formed, giving rise to P. ochroptera (Negro area) and P. crepitans (Guianan area). (f) About 0.8–0.3 Ma: two drainage systems on the Brazilian Shield, the Rio Tocantins and the Rio Xingu, were established as isolating barriers, creating three areas of endemism and their endemic species. AB, Amazon Basin; SB, Solimões Basin; P, location of Purus Arch.
River dynamics and Amazonia biodiversity -- Many hypotheses have been proposed to explain high species diversity in Amazonia, but few generalizations have emerged. In part, this has arisen from the scarcity of rigorous tests for mechanisms promoting speciation, and from major uncertainties about palaeogeographic events and their spatial and temporal associations with diversification. Ribas et al. (2011) examined the environmental history of Amazonia using a phylogenetic and biogeographic analysis of trumpeters (Aves: Psophia), which are represented by species in each of the vertebrate areas of endemism. Their relationships reveal an unforeseen ‘complete’ time-slice of Amazonian diversification over the past 3 million years. A temporally calibrated phylogeny was employed to test competing palaeogeographic hypotheses. Results were consistent with the establishment of the current Amazonian drainage system at approximately 3.0–2.0 million years ago and predict the temporal pattern of major river formation over Plio-Pleistocene times. Ribas et al. (2011) propose a palaeobiogeographic model for the last 3.0 Myr of Amazonian history that has implications for understanding patterns of endemism, the temporal history of Amazonian diversification and mechanisms promoting speciation. The history of Psophia, in combination with new geological evidence, provides the strongest direct evidence supporting a role for river dynamics in Amazonian diversification, and the absence of such a role for glacial climate cycles and refugia.
As a result of this history (& other factors described below):
The Amazon Rainforest - Gone Within Our Children's Lifetime?
Bird guilds and ecological specialization
(light bars for the tropics, dark bars for the temperate zone)
Arboreal dead-leaf-searching birds of the Neotropics -- Remsen and Parker (1984) reported that at least 11 species of birds in northern Bolivia and southern Peru are dead-leaf-searching “specialists”: more than 75% of their foraging observations of these species involved individuals searching for insects in dead, curled leaves suspended above ground in the vegetation. All known specialists of this kind belong to the families Furnariidae and Formicariidae. An additional six species exhibited dead-leaf-searching behavior in 25% to 75% of their foraging records. The number of specialists and regular users decreases with rising elevation in the Andes. Specialists disappear from the gradient between 2,000 m and 2,575 m, but regular users occur as high as 3,300 m, near timberline. As many as eight species of dead-leaf-searching specialists coexist in western Amazonia.
Why, in general, does bird species diversity increase from the poles to the equator?
Breeding bird species in North and Central America
A number of possible explanations for these latitudinal diversity gradients have been suggested (Wiens 1991):
Tropical Conservatism Hypothesis -- Wiens and Donoghue (2004) have proposed a 'tropical conservatism hypothesis' to help explain the tendency for species richness to increase from poles to equator. This hypothesis or model combines three basic ideas:
• Many groups of organisms that have high tropical species richness originated in the tropics & have spread to temperate regions either more recently or not at all. If a clade originated in the tropics then (all other things being equal) it should have more tropical species because of the greater time available for speciation to occur in tropical regions (i.e. the time-for-speciation effect).
• One reason that many extant clades of organisms originated in the tropics is that tropical regions had a greater geographical extent until relatively recently (30–40 million years ago, when temperate zones increased in size). If much of the world was tropical for a long period before the present, then (all other things being equal) more extant clades should have originated in the tropics than in temperate regions.
• Many species and clades are specialized for tropical climates, and the adaptations necessary to invade and persist in regions that experience freezing temperatures have evolved in only some. Tropical niche conservatism has helped maintain the disparity in species richness over time.
At least two lines of evidence support this tropical conservatism hypothesis. First, many groups of organisms that show the expected gradient in species richness also appear to show the predicted pattern of historical biogeography, with an origin in the tropics and more recent dispersal to temperate regions. For example, analyses of New World birds reveal older average divergences among tropical taxa than among temperate ones, as predicted by this hypothesis.
Second, many distantly related groups show similar northern range limits, in spite of the lack of an obvious geographical barrier, suggesting that cold climate and niche conservatism act as barriers to the invasion of temperate zones by tropical clades. Thus, many neotropical clades currently have their northern range limits in the tropical lowlands of Mexico (e.g., tinamous), whereas many other groups have their northern range limits in southern China and Vietnam (e.g., broadbills). These regions of biotic turnover in Mexico and Asia have not gone unnoticed; in fact, they correspond to borders between the global zoogeographical realms recognized by Wallace. Further, many of the groups involved are old, suggesting that there has been ample time for invasion of temperate regions, but that their northward dispersion was limited by their inability to adapt to colder climates.
Two approaches to the problem of explaining global patterns of species richness. Standard ecological approaches (a) seek correlations between the numbers of species of a given group at a given location (numbers along the edge of the globe) & environmental variables (e.g., temperature, indicated here by different shades of red). By contrast (b), Wiens & Donoghue (2004) advocate considering the biogeographical history of the species and clades that make up these differences in species richness between regions, & understanding how ecology, phylogeny & microevolution (e.g. adaptation) have combined to shape that biogeographical history. Each dot in the above diagram represents a species & its geographical location, and the lines connecting them represent both their evolutionary relationships and the simplified paths of dispersal. (b) also illustrates the tropical conservatism hypothesis, i.e., there are more species in tropical regions because most groups originated in the tropics & are specialized for a tropical climatic regime, most species and clades have been unable to disperse out of the tropics (because of niche conservatism), and the greater time and area available for speciation in the tropics has led to higher species richness in the tropics for most taxa. As shown here, the tropical conservatism hypothesis predicts that temperate lineages are often recently derived from clades in tropical regions, leading to (on average) shallower phylogenetic divergences among temperate lineages than among tropical lineages. Although not illustrated here, an important part of the tropical conservatism hypothesis is the idea that tropical regions were more extensive until the mid-Tertiary, which might help explain the greater number of extant clades originating in these areas.
Some terrestrial birds can be found almost everywhere. For example, Ospreys, Barn Owls, & swifts occupy every continent except Antarctica. However, although most can fly, most species of birds do not occur everywhere on earth where conditions appear to be favorable for them. Barriers to dispersal for birds include (Welty & Baptista 1988):
Species-richness maps of (a) occasional/non-following species that are sister to ant-following clades, (b) obligate army-ant-followers [with distributions of Myrmeciza fortis (obligate) and M. melanoceps (occasional) outlined], (c) regular army-ant-followers (thamnophilid and furnariid species combined), and (d) regular thamnophilid army-ant-followers.
Army-ant-following birds -- One of the most novel foraging strategies in Neotropical birds is army-ant-following, in which birds prey upon arthropods and small vertebrates flushed from the forest floor by swarm raids of the army-ant Eciton burchellii. This specialization is most developed in the typical antbirds (Thamnophilidae) which are divisible into three specialization categories: (1) those that forage at swarms opportunistically as army-ants move through their territories (occasional followers), (2) those that follow swarms beyond their territories but also forage independently of swarms (regular followers), and (3) those that appear incapable of foraging independently of swarms (obligate followers). Brumfield et al. (2007) found that regular following evolved only three times, and that the most likely evolutionary progression was from least (occasional) to more (regular) to most (obligate) specialized, with no reversals from the obligate state. Despite the dependence of the specialists on a single ant species, molecular dating indicates that army-ant-following has persisted in antbirds since the late Miocene.
Ant-following species density maps (above) illustrate clear differences between obligate and regular followers. Obligate followers are found almost exclusively in Amazonia, with increasing densities moving west toward the Andes. This same pattern is observed in occasional/non-followers. In contrast, regular followers were most abundant where obligates were least prevalent or absent (Atlantic coastal forests of Brazil, the Guyanan shield, and the forested lowlands west of the Andes). These results suggest that competition could have played a role in the evolution of army-ant-following species and in shaping their distribution.
Migration & Bird Biogeography
The avifauna of most regions (Nearctic, Palearctic, Neotropical, Ethiopian, & Oriental) includes numerous migratory species; species that spend only part of the year in those regions. About 400 species have breeding ranges in the Holarctic and winter in the tropics (primarily Central & South America, Africa, & Indomalaysia)(Lovei 1989).
What is the ancestral 'home' of migrants? For example, are most migrants in this hemisphere Nearctic species that move south to avoid winter conditions or Neotropical species that move north to take advantage of temporarily favorable conditions for breeding? Evidence suggests that many northern latitude migrants are tropical birds exploiting the long days and abundant insects of high-latitude summers rather than temperate birds escaping northern winters (Levey and Stiles 1992).
The Great Migration
Evolution of migratory behavior -- Migratory behavior can evolve when a resident species expands its range, due to intraspecific competition, into an area that is seasonally variable, providing greater resources for reproduction but harsher climatic stress and reduced food availability in the non-breeding season. Individuals breeding in these new regions at the fringe of the species' distribution are more productive, but to increase non-breeding survival they return to the ancestral range. This results, however, in even greater intraspecific competition because of their higher productivity, so that survival is enhanced for individuals that winter in areas not inhabited by the resident population. The Common Yellowthroat (Geothlypis trichas) of the Atlantic coast of the U.S. is a good example. Birds occupying the most southern part of the species' range in Florida are largely nonmigratory, whereas populations that breed as far north as Newfoundland migrate to the West Indies in the winter, well removed from the resident population in Florida. Because a migrant population gains an advantage on both its breeding and wintering range, it becomes more abundant, while the resident, non-migratory population becomes proportionately smaller and smaller in numbers. If changing environmental conditions become increasingly disadvantageous for the resident population or interspecific competition becomes more severe, the resident population could eventually disappear, leaving the migrant population as characteristic of the species. These stages in the evolution of migration are represented today by permanent resident populations, partial migrants, and fully migratory species. As for all adaptations, natural selection continues to mold and modify the migratory behavior of birds as environmental conditions perpetually change and species expand or retract their geographic ranges. Hence, the migratory patterns that we observe today will not be the migratory patterns of the future (From: Lincoln et al. 1998).
Checklist of things a migrant bird has to consider before departing on a long-distance flight (Piersma et al. 1990, Klaasen 1996).
Semipalmated Sandpipers (Calidris pusilla) stop in the Bay of Fundy (New Brunswick, Canada) during their fall migration from breeding areas in the Arctic. This crucial stopover allows the sandpipers to store large lipid reserves by eating seasonally abundant amphipods (Corophium volutator) buried in the mudflats. An estimated 75% (about 1 million individuals) of the world population of Semipalmated Sandpipers stops at this location. The Corophium mudshrimp contains unusually large amounts of n-3 polyunsaturated fatty acids (45% of total lipids) and is found only in the Bay of Fundy and along the coast of Maine. Dietary n-3 fatty acids are not only used as an energy source, but they also act as performance-enhancing substances to increase the capacity for endurance exercise, just before the sandpipers cross the Atlantic ocean to South America. The remainder of the annual migration cycle appears to be achieved through multiple short flights over land and along coastal areas. Pollution and rising sea levels caused by global warming are major threats to strategic stopover sites such as the Bay of Fundy and to the future of these sandpipers (Weber 2009).
Sense of direction -- Cochran et al. (2004) found that migrating thrushes rely on a built-in magnetic compass that they recalibrate each evening based on the direction of the setting sun. The research, which involved attaching radio transmitters to birds and following them by truck, is the first extensive study of bird navigation in the wild. The results appear to resolve conflicts between earlier laboratory-based studies, which had identified several possible navigational mechanisms but produced no consensus. Previous theories suggested that birds use some combination of magnetism, stars, landmarks, smells and other mechanisms as navigational aids. Cochran et al. (2004) caught birds just before they left for their overnight migratory flights and placed them in an artificial magnetic field. They then released the birds and followed them throughout the night. Birds that had been in the artificial magnetic field flew in the wrong direction, but recovered their orientation the next night.
"In the morning, shortly before they land, they see the sun and realize they have made a mistake," said Martin Wikelski, a co-author. "You can see them turn around 90 degrees." Cochran et al. (2004) concluded that birds rely on the location of the sunset to determine which way to fly. To maintain that heading throughout the night, they sense the Earth's magnetic field, just like a pilot uses a compass at night or in bad weather. "It is the simplest and most foolproof orientation mechanism we can imagine," Wikelski said. The combination of cues sun and magnetic field neatly correct each other for possible problems. Migrating at night, birds cannot maintain a fix on the sun and cannot rely on seeing stars because of clouds. A bird's magnetic compass also is not sufficient. The location of the magnetic north pole the spot on the globe where a compass points is not stable (it is currently in Canada, hundreds of miles from the geographic north pole). The magnetic poles also completely reverse locations every few thousand years, so the north arrow on a compass would suddenly point south. It makes sense then that birds use the magnetic field only as a guide to keep them on a path that is determined primarily by the sun, Wikelski said. It doesn't matter which way the magnetic field points, so long as it stays steady through the night.
In one set of experiments, the researchers tracked Gray-cheeked Thrushes, which all tend to migrate in the same direction. Seven of eight birds exposed to an artificial magnetic field flew in a significantly different direction than 14 that were not. The researchers did similar experiments with Swainson's Thrushes (pictured above), whose headings vary considerably from one bird to another. They exposed the Swainson's thrushes to artificial magnetic fields and followed them for at least two days. All the treated birds picked a significantly different direction on their second day, whereas unmanipulated birds kept flying the same direction both days. Wikelski believes the results, while based on just two species, are likely to apply to most migratory birds. "It's such a simple and elegant mechanism that I would say it is widespread," he said. -- Princeton Weekly Bulletin
Greater noctule bat
Bats vs. migrating Palearctic passerines -- Along food chains, i.e., at different trophic levels, the most abundant taxa often represent exceptional food reservoirs, and are hence the main target of consumers and predators. The capacity of an individual consumer to opportunistically switch towards an abundant food source, for instance, a prey that suddenly becomes available in its environment, may offer such strong selective advantages that ecological innovations may appear and spread rapidly. New predator-prey relationships are likely to evolve even faster when a diet switch involves the exploitation of an unsaturated resource for which few or no other species compete. Using stable isotopes of carbon and nitrogen as dietary tracers, Popa-Lisseanu et al. (2007) reported support for the controversial hypothesis that the greater noctule bat (Nyctalus lasiopterus), a rare Mediterranean aerial-hawking bat, feeds on the wing upon the multitude of flying passerines during their nocturnal migratory journeys, a resource that, although showing a predictable distribution in space and time, is only seasonally available. No predator had previously been reported to exploit this extraordinarily diverse and abundant food reservoir represented by nocturnally migrating passerines.
Shearwater migrations originating from breeding colonies in New Zealand. (a) Interpolated geolocation tracks of 19 Sooty Shearwaters during breeding (light blue) and subsequent migration pathways (yellow, start of migration and northward transit; orange, wintering grounds and southward transit). The 30° parallels, equator, and international dateline are indicated by dashed lines. (b–d) Representative figure-eight movement patterns of individual shearwaters traveling to one of three "winter" destinations in the North Pacific. These tracks also represent those of three breeding pairs to reveal the dispersion and extent of each pair.
Electronic tracking tags have revolutionized our understanding of broad-scale movements and habitat use of highly mobile marine animals, but a large gap in our knowledge still remains for a wide range of small species. Shaffer et al. (2006) reported the extraordinary transequatorial postbreeding migrations of a small seabird, the Sooty Shearwater (Puffinus griseus), obtained with miniature archival tags that log data for estimating position, dive depth, and ambient temperature. Tracks (262 ± 23 days) revealed that shearwaters fly across the entire Pacific Ocean in a figure-eight pattern while traveling 64,037 ± 9,779 km roundtrip, the longest animal migration ever recorded electronically. Each shearwater made a prolonged stopover in one of three discrete regions off Japan, Alaska, or California before returning to New Zealand through a relatively narrow corridor in the central Pacific Ocean. Transit rates as high as 910 ± 186 km per day were recorded, and shearwaters accessed prey resources in both the Northern and Southern Hemispheres' most productive waters from the surface to 68.2 m depth. Their results indicate that Sooty Shearwaters integrate oceanic resources throughout the Pacific Basin on a yearly scale. Sooty Shearwater populations today are declining, and because they operate on a global scale, they may serve as an important indicator of climate change and ocean health.
Recent changes in Bird Biogeography - Extinction:
Source: American Bird Conservancy
Threats (historical & current) that have been the primary causes of habitat loss in U.S. threatened habitats.
Threats that have only affected one each of the 20 most threatened habitats are combined as 'other',
and include forest succession, coastal engineering, deer, fisheries issues, recreation, and fire
(Source: American Bird Conservancy).
Declining bird populations
Tourism and Lesser Prairie Chicken conservation
In October 1993, William Lishman took off from his Toronto farm in a microlight of his own design and construction. Eighteen birds followed 'Goose Leader' in a V-formation all the way south to Virginia. The following April the banded flock returned unaided to Lishman's farm to greet their surrogate father. Eventually the principles learned by working with Canada Geese and Sandhill Cranes were applied to endangered species such as Whooping Cranes and Trumpeter Swans.
Estimated global numbers of individual birds (in billions) over the past several hundred years, based on low (bottom), medium (middle), and high (top) densities, beginning with the pre-agricultural pattern of land use (Gaston et al. 2003).
Habitat conversion and global avian biodiversity loss -- The magnitude of the impacts of human activities on global biodiversity has been documented at several organizational levels. However, although there have been numerous studies of the effects of local-scale changes in land use (e.g. logging) on the abundance of groups of organisms, broader continental or global-scale analyses addressing the same basic issues remain largely wanting. Nonetheless, changing patterns of land use, associated with the appropriation of increasing proportions of net primary productivity by the human population, seem likely not simply to have reduced the diversity of life, but also to have reduced the carrying capacity of the environment in terms of the numbers of other organisms that it can sustain. Gaston et al. (2003) estimated the size of the existing global breeding bird population, and made a first approximation as to how much this has been modified as a consequence of land-use changes wrought by human activities. Summing numbers across different land-use classes gives a best current estimate of a global population of about 87 billion breeding bird individuals (with about 25% of these birds in tropical forest, 13% in tropical woodland, 11% in boreal forest, 8% in savannah habitat, and 19% in human-modified habitats like cropland and pasture). Applying the same methodology to estimates of original land-use distributions suggests that conservatively this may represent a loss of between 20-25% of pre-agricultural bird numbers. This loss is shared across a range of temperate and tropical land-use types.
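The bookkeeping behind such a global estimate is straightforward: an assumed breeding-bird density for each land-use class is multiplied by the area of that class, and the products are summed. The sketch below shows only that arithmetic; the areas and densities are hypothetical placeholders, not the values used by Gaston et al. (2003).

```python
# Back-of-envelope illustration of a global bird-population estimate:
# sum (area of land-use class) x (assumed breeding-bird density).
# All numbers below are hypothetical placeholders for illustration only.
land_use = {
    # class:            (area, million km^2), (density, birds per km^2)
    "tropical forest":   (14.0, 1500.0),
    "tropical woodland": (10.0, 1100.0),
    "boreal forest":     (12.0,  800.0),
    "savannah":          (15.0,  450.0),
    "cropland/pasture":  (35.0,  470.0),
    "other":             (45.0,  300.0),
}

total = sum(area_mkm2 * 1e6 * density
            for area_mkm2, density in land_use.values())
print(f"Illustrative global total: {total / 1e9:.0f} billion breeding birds")
```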
Ten percent of all bird species are likely to disappear by the year 2100, and another 15% could be on the brink of extinction. This dramatic loss will have a negative impact on forest ecosystems and agriculture worldwide. Sekercioglu et al. (2004) estimated that, by 2100, as many as one out of four may be functionally extinct—that is, critically endangered or extinct in the wild. "Even though only 1.3% of bird species have gone extinct since 1500, the global number of individual birds is estimated to have experienced a 20-25% reduction during the same period," wrote Sekercioglu et al. (2004). "Given the momentum of climate change, widespread habitat loss and increasing numbers of invasive species, avian declines and extinctions are predicted to continue unabated in the near future."
The study was based on an analysis of all 9,787 living and 129 extinct bird species. To forecast probable rates of extinction, the authors simulated best-case, intermediate-case and worst-case scenarios for the future:
"It's hard to imagine the disappearance of a bird species making much difference to human well-being," said co-author Daily. "Yet consider the case of the Passenger Pigeon. Its loss is thought to have made Lyme disease the huge problem it is today. When Passenger Pigeons were abundant—and they used to occur in unimaginably large flocks of hundreds of millions of birds—the acorns on which they specialized would have been too scarce to support large populations of deer mice, the main reservoir of Lyme disease, that thrive on them today."
The authors also found that numerous insect-eating species face extinction. "Exclusions of insectivorous birds from apple trees, coffee shrubs, oak trees and other plants have resulted in significant increases in insect pests and consequent plant damage," the authors wrote, adding that the extreme specializations of many insectivorous birds, especially in the tropics, make it unlikely that other organisms will be able to replace the birds' crucial role in controlling pests.
"The societal importance of ecosystem services is often appreciated only upon their loss," the authors wrote. "Disconcertingly, avian declines may in fact portray a best-case scenario, since fish, amphibians, reptiles and mammals are 1.7 to 2.5 times more threatened [than birds]." Invertebrates, which may be even more ecologically significant than animals, also are disappearing, they noted. Therefore, "investments in understanding and preventing declines in populations of birds and other organisms will pay off only while there is still time to act," the authors concluded. -- Stanford News Service
Major threats to globally threatened bird species
Amazon drought and deforestation
Extinction of native breeding birds since 1778. Steps mark the decade of the last record for each form considered extinct (A.O.U. 1983). The 70 forms shown as currently existing include 13 in peril, with steps marking the decades of their last known records. Yellow represents prehistoric forms. (Source: biology.usgs.gov/s+t/noframe/t017.htm)
Population trends for Guam birds as indicated by roadside surveys, 1976–1998 (Wiles et al. 2003).
An endemic radiation of Malagasy songbirds -- The bird fauna of Madagascar includes a high proportion of endemic species, particularly among the passerines. The endemic genera of Malagasy songbirds are not allied obviously with any African or Asiatic taxa, and their affinities have been debated since the birds were first described. Cibois et al. (2001) used mitochondrial sequence data to estimate the relationships of 13 species of endemic Malagasy songbirds, 17 additional songbird species, and one species of suboscine passerine. Most previous classifications of these endemic Malagasy songbirds suggested colonization of Madagascar by at least three different lineages of forest-dwelling birds (babblers, bulbuls, & warblers), but the phylogeny of Cibois et al. (2001) suggests a single colonization event. They suggest that a single colonization seems more likely, based on the observation that successful colonization of islands by forest-restricted birds is rare. The avifauna of islands typically comprises habitat generalists, while nine species in the endemic Malagasy clade are forest dwelling. Overall, the endemic Malagasy songbird clade rivals other island radiations, including the vangas of Madagascar and the finches of the Galapagos, in ecological diversity.
Why have so many endemic species been lost from islands? The main problems arise from exotic (introduced, non-native) plants, predators and herbivores. They have caused island extinctions in the following ways:
The Stephens Island Wren (Traversia lyalli) is known in recent times only from Stephens Island, New Zealand, although it is common in fossil deposits from both of the main islands. The species was flightless and restricted to the rocky ground. Construction of a lighthouse on Stephens Island in 1894 led to the clearance of most of the island's forest, with predation by the lighthouse keeper's cat delivering the species' coup-de-grace.
For birds, Easter Island’s remoteness and lack of predators made it an ideal haven as a breeding site, at least until humans arrived. Among the prodigious numbers of seabirds that bred there were albatross, boobies, frigate birds, fulmars, petrels, prions, shearwaters, storm petrels, terns, and tropic birds. With at least 25 nesting species, Easter Island was the richest seabird breeding site in Polynesia and probably in the whole Pacific. Pollen records show that destruction of Easter Island’s forests was well under way by the year 800, just a few centuries after the start of human settlement. Not long after 1400, the palm became extinct, and soon thereafter the forest itself. Its doom had been approaching as people cleared land to plant gardens; as they felled trees to build canoes, to transport and erect statues, and to burn; and probably as the native birds died out that had pollinated the trees’ flowers and dispersed their fruit. The destruction of the island’s animals was as extreme as that of the forest: without exception, every species of native land bird became extinct. The colonies of more than half of the seabird species breeding on Easter or on its offshore islets were wiped out. -- Jared Diamond
Weaker immune responses help explain island populations' susceptibility to exotic diseases -- Lindström et al. (2004) have found that Darwin's finches on smaller islands in the Galapagos archipelago have weaker immune responses to disease and foreign pathogens—findings that could help explain why island populations worldwide are particularly susceptible to disease. Johannes Foufopoulos, one of the co-authors, noted that "The introduction of exotic parasites and diseases through travel, commerce and domestic animals and the resulting destruction in native wildlife populations is a worldwide problem, but it's even more serious for species that have evolved on islands. For example, in the Hawaiian islands, many native bird species have gone extinct after the introduction of avian malaria. The Galapagos authorities are now realizing that the greatest danger to the islands' wildlife comes from exotic species, such as invasive pathogens, accidentally introduced by humans." The investigators found that larger islands with larger bird populations harbor more native parasites and diseases, because the number of parasites is directly dependent on the size of the population. Island size and parasite richness then influenced the strength of the immune response of the hosts. By challenging the birds' immune systems with foreign proteins, they measured the average immune response of each island population. Finches on smaller islands with fewer parasites had a weaker immune response. For these birds, Foufopoulos said, "maintaining a strong immune system is a little bit like house insurance: you don't want to spend too much on an expensive policy if you live in an area with no earthquakes, fires or floods." Similarly, if parasites are scarce, the birds don't need to invest in an "expensive" immune system, he said. When new parasites are then accidentally introduced by humans to these islands, the birds are ill-prepared to resist infection.
See also: Darwin's Finches At Risk
1. Large cactus finch (Geospiza conirostris), 2. Large ground finch (G. magnirostris), 3. Medium ground finch (Geospiza fortis), 4. Cactus finch (G. scandens), 5. Sharp-beaked ground finch (G. difficilis), 6. Small ground finch (G. fuliginosa), 7. Woodpecker finch (Cactospiza pallida), 8. Vegetarian tree finch (Platyspiza crassirostris), 9. Medium tree finch (Camarhynchus pauper), 10. Large tree finch (Camarhynchus psittacula), 11. Small tree finch (C. parvulus), 12. Warbler finch (Certhidia olivacea), and 13. Mangrove finch (Cactospiza heliobates)
Recent changes in Bird Biogeography - Introductions:
|Big brains & novel environments -- The widely held hypothesis that enlarged brains have evolved as an adaptation to cope with novel or altered environmental conditions lacks firm empirical support. Sol et al. (2005) tested this hypothesis for birds by examining whether large-brained species show higher survival than small-brained species when introduced to nonnative locations. Using a global database documenting the outcome of >600 introduction events, they confirmed that avian species with larger brains, relative to their body mass, tend to be more successful at establishing themselves in novel environments. Moreover, Sol et al. (2005) provided evidence that larger brains help birds respond to novel conditions by enhancing their innovation propensity rather than indirectly through noncognitive mechanisms. These findings provide strong evidence for the hypothesis that enlarged brains function, and hence may have evolved, to deal with changes in the environment.|
Recent changes in Bird Biogeography - Range Extensions and Shifts in Response to Climate Change?:
Blue-gray Gnatcatcher population trends (1966-2003). Note the increase in the northern portion of its breeding range.
Inca Dove population trends (1966-2003).
Breeding distributions of North American birds moving north -- Geographic changes in species distributions toward traditionally cooler climes is one hypothesized indicator of recent global climate change. Hitch and Leberg (2007) examined distribution data on 56 bird species. If global warming is affecting species distributions across the temperate northern hemisphere, data should show the same northward range expansions of birds that have been reported for Great Britain. Because a northward shift of distributions might be due to multidirectional range expansions for multiple species, the possibility that birds with northern distributions may be expanding their ranges southward was also examined. There was no southward expansion of birds with a northern distribution, indicating that there is no evidence of overall range expansion of insectivorous and granivorous birds in North America. As predicted, the northern limit of birds with a southern distribution showed a significant shift northward (2.35 km/year). Among the species showing the greatest shift northward were Inca Doves, Fish Crows, Blue-winged Warblers, Hooded Warblers, Blue-gray Gnatcatchers, Black-billed Cuckoos, and Golden-winged Warblers. This northward shift is similar to that observed in previous work conducted in Great Britain: the widespread nature of this shift in species distributions over two distinct geographical regions and its coincidence with a period of global warming suggests a connection with global climate change.
Migratory restlessness in a non-migratory bird - The urge of captive birds to migrate manifests itself in seasonally occurring restlessness, termed “Zugunruhe.” Key insights into migration and an endogenous basis of behavior are based on Zugunruhe of migrants but have scarcely been tested in nonmigratory birds. Helm and Gwinner (2006) recorded Zugunruhe in African Stonechats, small passerine birds that defend year-round territories and that diverged from northern migrants at least 1 million years ago. Such results demonstrate that Zugunruhe is a regular feature of their endogenous program, and is precisely timed by photoperiod. Such programs could be activated when movements by non-migratory birds become necessary. Helm and Gwinner (2006) propose that low-level Zugunruhe may be common in birds, including residents, and could underlie recent rapid changes in movement and range patterns attributed to global change and other human interventions. Attention to Zugunruhe of resident birds promises new insights into diverse and dynamic migration systems and enhances predictions of avian responses to global change.
Stonechats (Saxicola torquata) were ideal subjects for this study, with two subspecies: a north temperate migrant (S. t. rubicola) and an equatorial resident (S. t. axillaris). They display persistent circannual rhythms of molt and reproduction under constant conditions. Migrants show distinct Zugunruhe, timed by precise photoperiodic programs. In contrast to northern obligatory migrants, stonechats from equatorial Kenya defended their breeding territories throughout the year. Genetic distances between the two disjunctly distributed subspecies are large.
Birds and Global Warming
Scientists of the U.S. Geological Survey, in cooperation with Canadian scientists, conduct the annual North American Breeding Bird Survey, which provides distribution and abundance information for birds across the United States and Canada. From these data, collected by volunteers under strict guidance from the U.S. Geological Survey, shifts in bird ranges and abundances can be examined. Because these censuses were begun in the 1960s, these data can provide a wealth of baseline information. Price (1995) has used these data to examine the birds that breed in the Great Plains. By using the present-day ranges and abundances for each of the species (e.g., Bobolink, top map on the right), Price derived large-scale, empirical-statistical models based on various climate variables (for example, maximum temperature in the hottest month and total precipitation in the wettest month) that provided estimates of the current bird ranges and abundances (middle map on the right). Then, by using a general circulation model to forecast how doubling of CO2 would affect the climate variables in the models, he applied the statistical models to predict the possible shape and location of the birds' ranges and abundances (bottom map on the right).
Significant changes were found for nearly all birds examined. The ranges of most species moved north, up mountain slopes, or both. The empirical models assume that these species are capable of moving into these more northerly areas, that is, if habitat is available and no major barriers exist. Such shifting of ranges and abundances could cause local extinctions in the more southern portions of the birds' ranges, and, if movement to the north is impossible, extinctions of entire species could occur. We must bear in mind, however, that this empirical-statistical technique, which associates large-scale patterns of bird ranges with large-scale patterns of climate, does not explicitly represent the physical and biological mechanisms that could lead to changes in birds' ranges. Therefore, the detailed maps should be viewed only as illustrative of the potential for very significant shifts with different possible doubled-CO2 climate change scenarios.
(a) Map of current range and abundance of the Bobolink as determined from actual observations during the U.S. Geological Survey Breeding Bird Survey and (b) map of current range and abundance of the Bobolink as estimated from the empirical-statistical model. The high correspondence in patterns between maps a and b suggests that this model reliably captures many of the features of the actual observed range and abundance of this species as depicted in map a. (c) Map of the forecasted range and abundance of the Bobolink for the climate change response of a model with doubled CO2. This map illustrates the potential for very significant shifts that doubled CO2 could cause (Price 1995).
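A minimal sketch of the empirical-statistical approach described above: fit a presence/absence model against climate variables under the current climate, then re-apply the fitted model with the climate variables shifted to a doubled-CO2 scenario and compare predicted occupancy. The data, the logistic-regression choice, and the +3.5°C / +5% precipitation shift are all illustrative assumptions, not Price's (1995) actual model or scenario.

```python
# Sketch: climate-envelope model fitted under current climate, then projected
# under a hypothetical doubled-CO2 climate. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
tmax_hot = rng.normal(28.0, 6.0, n)          # max temp of hottest month (deg C)
precip_wet = rng.gamma(2.0, 60.0, n)         # precip of wettest month (mm)

# synthetic "observed" presence: a species that avoids very hot, dry cells
logit = 4.0 - 0.25 * tmax_hot + 0.01 * precip_wet
present = rng.random(n) < 1 / (1 + np.exp(-logit))

X_now = np.column_stack([tmax_hot, precip_wet])
model = LogisticRegression().fit(X_now, present)

# apply a hypothetical doubled-CO2 scenario: warmer and slightly wetter
X_2xCO2 = np.column_stack([tmax_hot + 3.5, precip_wet * 1.05])
p_now = model.predict_proba(X_now)[:, 1]
p_future = model.predict_proba(X_2xCO2)[:, 1]
print(f"mean occupancy now: {p_now.mean():.2f}, under 2xCO2: {p_future.mean():.2f}")
```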
Global warming - birds nesting earlier
Digestive System: Food & Feeding Habits
Back to Bird Biogeography I
A.O.U. 1983. Check-list of North American birds, 6th ed. [with supplements through 1993]. American Ornithologists Union, Washington, DC.
Avise, J.C. 2000. Phylogeography: the history and formation of species. Harvard University Press, Cambridge, MA.
Avise, J.C. and D. Walker. 1998. Pleistocene phylogeographic effects on avian populations and the speciation process. Proc. Roy. Soc. Lond. B 265:457-463.
Baker, A.J. 1991. A review of New Zealand ornithology. Current Ornithology 8:1-67.
Brumfield, R. T., J. G. Tello, Z. A. Cheviron, M. D. Carling, and N. Crochet. 2007. Phylogenetic conservatism and antiquity of a tropical specialization: army-ant-following in the typical antbirds (Thamnophilidae). Molecular Phylogenetics and Evolution 45:1-13.
Cibois, A., B. Slikas, T. S. Schulenberg, and E. Pasquet. 2001. An endemic radiation of Malagasy songbirds is revealed by mitochondrial DNA sequence data. Evolution 55:1092-1206.
Cochran, W.W., H. Mouritsen, and M. Wikelski. 2004. Migrating songbirds recalibrate their magnetic compass daily from twilight cues. Science 304:405-408.
Cracraft, J. 1974. Continental drift and vertebrate distribution. Ann. Rev. Ecol. Syst. 5:215-261.
Darlington, P.J. 1957. Zoogeography: the geographical distribution of animals. J. Wiley and Sons, New York, NY.
Davies, R. G., C. D. L. Orme, D. Storch, V. A. Olson, G. H. Thomas, S. G. Ross, T.-S. Ding, P. C. Rasmussen, P. M. Bennett, I. P.F. Owens, T. M. Blackburn, and K. J. Gaston. 2007. Topography, energy and the global distribution of bird species richness. Proceedings of the Royal Society B 274: online early.
Gaston, K.J., T. M. Blackburn, and K. K. Goldewijk. 2003. Habitat conversion and global avian biodiversity loss. Proceedings of the Royal Society of London B 270: 1293-1300.
Gill, F.B. 1995. Ornithology, second ed. W.H. Freeman and Co., New York, NY.
Hawkins, B. A., E. E. Porter, and J. A. F. Diniz-Filho. 2003. Productivity and history as predictors of the latitudinal diversity gradient of terrestrial birds. Ecology 84: 1608-1623.
Helm, B. and E. Gwinner. 2006. Migratory restlessness in an equatorial nonmigratory bird. PLoS Biol 4: e110.
Hitch, A. T. and P. L Leberg. 2007. Breeding distributions of North American bird species moving north as a result of climate change. Conservation Biology 21: 534-539.
Houde, P. and S.L. Olson. 1981. Paleognathous carinate birds from the early Tertiary of North America. Science 214:1236-1237.
Karr, J.R. 1990. Birds of tropical rainforest: comparative biogeography and ecology. Pp. 215-228 in Biogeography and ecology of forest bird communities (A. Keast, ed.). SPB Academic Publ., The Hague, Netherlands.
Klaassen, M. 1996. Metabolic constraints on long-distance migration in birds. Journal of Experimental Biology 199: 57-64.
Klicka, J. and R.M. Zink. 1997. The importance of recent Ice Ages in speciation: a failed paradigm. Science 277:1666-1669.
Lever, C. 1987. Naturalized birds of the world. Longman Scientific & Technical, Essex, England.
Levey, D.J. and F.G. Stiles. 1992. Evolutionary precursors of long distance migration: resource availability and movement patterns in Neotropical landbirds. American Naturalist 140:447-476.
Lincoln, F. C., S. R. Peterson, and J. L. Zimmerman. 1998. Migration of birds. U.S. Department of the Interior, U.S. Fish and Wildlife Service, Washington, D.C. Circular 16. Jamestown, ND: Northern Prairie Wildlife Research Center Home Page. http://www.npwrc.usgs.gov/resource/othrdata/migratio/migratio.htm (Version 02APR2002).
Lindström, K.M., J. Foufopoulos, H. Pärn, & M. Wikelski. 2004. Immunological investments reflect parasite abundance in island populations of Darwin's finches. Proc. Roy. Soc. Lond. B 271:1513-1519.
Line, L. 2003. Silent spring: a sequel? National Wildlife, vol. 41.
Lovei, G.L. 1989. Passerine migration between the Palearctic and Africa. Current Ornithology 6:143-174.
MacArthur, R.H., H. Recher, & M.L. Cody. 1966. On the relation between habitat selection and species diversity. Am. Nat. 100:319-332.
MacArthur, R.H. and E.O. Wilson. 1967. The theory of island biogeography. Princeton Univ. Press, Princeton, NJ.
Mayr, E. 1946. History of the North American bird fauna. Wilson Bulletin 58:3-41.
Mayr, E. 1964. Inference concerning the Tertiary American bird faunas. Proc. Natl. Acad. Sci. 51:280-288.
Mengel, R.N. 1964. The probable history of species formation in some northern wood warblers (Parulidae). Living Bird 3:9-43.
Moreau, R.E. 1952. Africa since the Mesozoic: with particular reference to certain biological problems. Proc. Zool. Soc. London 121:869-913.
Olson, S.L. 1985. The fossil record of birds. In D.S. Farner, J.R. King, and K.C. Parkes (eds.), Avian Biology, Vol. 8, pp. 79-238. Academic Press, New York.
Piersma, T., L. Zwarts, and J. H. Bruggeman. 1990. Behavioural aspects of the departure of waders before long-distance flights: flocking, vocalizations, flight paths and diurnal timing. Ardea 78: 157-184.
Popa-Lisseanu, A. G., A. Delgado-Huertas, M. G. Forero, A. Rodríguez, R. Arlettaz, and C. Ibáñezet. 2007. Bats' conquest of a formidable foraging niche: the myriads of nocturnally migrating songbirds. PLoS ONE 2: e205.
Porter, W. F. 1994. Family Meleagrididae (Turkeys) in del Hoyo, J., Elliott, A., & Sargatal, J., eds. Handbook of the Birds of the World, Vol. 2. Lynx Edicions, Barcelona.
Price, J. 1995. Potential impacts of global climate change on the summer distribution of some North American grasslands birds. Ph.D. dissertation, Wayne State University, Detroit, MI.
Proctor, N.S. and P.J. Lynch. 1993. Manual of ornithology: avian structure and function. Yale Univ. Press, New Haven, CT.
Rabenold, K.N. 1993. Latitudinal gradients in avian species diversity and the role of long-distance migration. Current Ornithology 10:247-274.
Rahbek, C. and G. R. Graves. 2001. Multiscale assessment of patterns of avian species richness. Proceedings of the National Academy of Sciences 98:4534-4539.
Remsen, J. V., Jr. and T. A. Parker III. 1984. Arboreal dead-leaf-searching birds of the Neotropics. Condor 86:36-41.
Ribas, C. C., A. Aleixo, A. C. R. Nogueira, C. Y. Minaki, and J. Cracraft. 2011. A palaeobiogeographic model for biotic diversification within Amazonia over the past three million years. Proceedings of the Royal Society B, early online.
Sekercioglu, C. H., G. C. Daily, and P. R. Ehrlich. 2004. Ecosystem consequences of bird declines. Proc. Natl. Acad. Sci. 101: 18042-18047.
Selander, R.K. 1971. Systematics and speciation in birds. Pp. 57-147 in Avian Biology, vol. 1 (D.S. Farner and J.R. King, eds.). Academic Press, New York, NY.
Sibley, C.G. and J.E. Ahlquist. 1985. The phylogeny and classification of the Australo-Papuan passerine birds. Emu 85:1-14.
Sol, D., R. P. Duncan, T. M. Blackburn, P. Cassey, and L. Lefebvre. 2005. Big brains, enhanced cognition, and response of birds to novel environments. Proceedings of the National Academy of Science 102:
Weber, J.-M. 2009. The physiology of long-distance migration: extending the limits of endurance metabolism. Journal of Experimental Biology 212: 593-597.
Welty, J.C. and L. Baptista. 1988. The life of birds, 4th ed. Saunders College Publishing, New York, NY.
Wiens, J.A. 1991. Distribution. Pp. 156-174 in The Cambridge Encyclopedia of Ornithology (M. Brooke and T. Birkhead, eds.). Cambridge Univ. Press, New York, NY.
Wiens, J. J. and M. J. Donoghue. 2004. Historical biogeography, ecology and species richness. Trends in Ecology and Evolution 19:639-644.
Wiles, G. J., J. Bart, R. E. Beck, and C. F. Aguon. 2003. Impacts of the Brown Tree Snake: patterns of decline and species persistence in Guam's avifauna. Conservation Biology 17: 1350-1360.
Willson, M.F. 1976. The breeding distribution of North American migrant birds: a critique of MacArthur (1959). Wilson Bulletin 88:582-587.
to BIO 554/754 Syllabus | http://people.eku.edu/ritchisong/birdbiogeography2.html | 13 |
79 | The Pythagorean Theorem
The Pythagorean Theorem shows the relationship between the sides (a and b) and the hypotenuse (c) of a right triangle. The right triangle I will be using is shown below.
The Pythagorean Theorem states that, in a right triangle, the square of a (a²) plus the square of b (b²) is equal to the square of c (c²).
Summary: The Pythagorean Theorem is a²+b²=c², or leg² + leg² = hyp². It works only for right triangles.
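A quick numeric check of the relationship (a minimal sketch; the 3-4-5 triangle is just a sample, not part of the chapter):

```python
import math

# Pythagorean Theorem: leg^2 + leg^2 = hyp^2
a, b = 3.0, 4.0                 # the two legs of a right triangle (sample values)
c = math.sqrt(a**2 + b**2)      # solve for the hypotenuse

print(c)                        # 5.0
print(a**2 + b**2 == c**2)      # True, since 9 + 16 = 25
```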
Proof of the Pythagorean Theorem
Now that we know the Pythagorean Theorem, take a look at the following diagram.
Look at the large square. The large square's area can be written as (a+b)²,
since each side's length is (a+b). Look at the tilted square in the middle. Its area can be written as c².
Now, look at each of the triangles at the corners of the large square. Each triangle's area is ½ab.
There are four triangles, so the area of all four of them combined is 4(½ab), or 2ab.
The area of the large square is equal to the area of the four triangles plus the area of the tilted square. This can be written as (a+b)² = c² + 4(½ab).
Using Algebra, this can be simplified.
(a+b)² = c² + 4(½ab)
(a+b)(a+b) = c² + 2ab
a² + 2ab + b² = c² + 2ab
a² + b² = c²
Now we can see why the Pythagorean Theorem works, or, in other words, we can see proof of the Pythagorean Theorem.
Note, however, that this proof is algebraic rather than a construction within classical Euclidean geometry, so it is not an elementary geometric proof.
Hundreds of other proofs of the Pythagorean Theorem have been published as well.
- You should be able to explain why the following is a proof of the Pythagorean Theorem:
Summary: The Pythagorean Theorem can be proved using diagrams.
- Geometry Main Page
- Geometry/Chapter 1 Definitions and Reasoning (Introduction)
- Geometry/Chapter 2 Proofs
- Geometry/Chapter 3 Logical Arguments
- Geometry/Chapter 4 Congruence and Similarity
- Geometry/Chapter 5 Triangle: Congruence and Similarity
- Geometry/Chapter 6 Triangle: Inequality Theorem
- Geometry/Chapter 7 Parallel Lines, Quadrilaterals, and Circles
- Geometry/Chapter 8 Perimeters, Areas, Volumes
- Geometry/Chapter 9 Prisms, Pyramids, Spheres
- Geometry/Chapter 10 Polygons
- Geometry/Chapter 11
- Geometry/Chapter 12 Angles: Interior and Exterior
- Geometry/Chapter 13 Angles: Complementary, Supplementary, Vertical
- Geometry/Chapter 14 Pythagorean Theorem: Proof
- Geometry/Chapter 15 Pythagorean Theorem: Distance and Triangles
- Geometry/Chapter 16 Constructions
- Geometry/Chapter 17 Coordinate Geometry
- Geometry/Chapter 18 Trigonometry
- Geometry/Chapter 19 Trigonometry: Solving Triangles
- Geometry/Chapter 20 Special Right Triangles
- Geometry/Chapter 21 Chords, Secants, Tangents, Inscribed Angles, Circumscribed Angles
- Geometry/Chapter 22 Rigid Motion
- Geometry/Appendix A Formulas
- Geometry/Appendix B Answers to problems
- Appendix C. Geometry/Postulates & Definitions
- Appendix D. Geometry/The SMSG Postulates for Euclidean Geometry | http://en.wikibooks.org/wiki/Geometry/Chapter_14 | 13 |
164 | In this chapter we will introduce the concepts of work and kinetic energy. These tools will significantly simplify the manner in which certain problems can be solved.
Figure 7.1. A force F acting on a body. The resulting displacement is indicated by the vector d.
Suppose a constant force F acts on a body while the object moves over a distance d. Both the force F and the displacement d are vectors that are not necessarily pointing in the same direction (see Figure 7.1). The work done by the force F on the object as it undergoes a displacement d is defined as W = F · d = F d cos([phi]), where [phi] is the angle between F and d.
The work done by the force F is zero if:
* d = 0: displacement equal to zero
* [phi] = 90deg.: force perpendicular to displacement
Figure 7.2. Positive or Negative Work.
The work done by the force F can be positive or negative, depending on [phi]. For example, suppose we have an object moving with constant velocity. At time t = 0 s, a force F is applied. If F is the only force acting on the body, the object will either increase or decrease its speed depending on whether or not the velocity v and the force F are pointing in the same direction (see Figure 7.2). If (F * v) > 0, the speed of the object will increase and the work done by the force on the object is positive. If (F * v) < 0, the speed of the object will decrease and the work done by the force on the object is negative. If (F * v) = 0 we are dealing with centripetal motion and the speed of the object remains constant. Note that for the friction force (F * v) < 0 (always) and the speed of the object is always reduced !
By definition, work is a scalar. The unit of work is the Joule (J). From the definition of the work it is clear that:
1 J = 1 N m = 1 kg m^2/s^2
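A minimal sketch of the definition W = F d cos([phi]); the sample numbers below are assumed for illustration and are not taken from the text:

```python
import math

def work(force, displacement, phi_degrees):
    """Work done by a constant force over a straight displacement:
    W = F * d * cos(phi), where phi is the angle between F and d."""
    return force * displacement * math.cos(math.radians(phi_degrees))

print(work(10.0, 2.0, 0.0))     # 20.0 J: force parallel to the displacement
print(work(10.0, 2.0, 90.0))    # ~0 J: force perpendicular to the displacement
print(work(10.0, 2.0, 180.0))   # -20.0 J: force opposing the motion (like friction)
```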
Figure 7.3. Forces acting on the safe.
Sample Problem 7-2
A safe with mass m is pushed across a tiled floor with constant velocity for a distance d. The coefficient of friction between the bottom of the safe and the floor is μk. Identify all the forces acting on the safe and calculate the work done by each of them. What is the total work done ?
Figure 7.3 shows all the forces that act on the safe. Since the safe is moving with constant velocity, its acceleration is zero, and the net force acting on it is zero
The components of the net force along the x-axis and along the y-axis must therefore also be zero
The second equation shows that N = W = m g. The force that is applied to the safe can now be calculated
The work done on the safe by each of the four forces can now be calculated:
The total work done on the safe is therefore
which could be expected since the net force on the safe is zero.
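A short numerical version of Sample Problem 7-2; since the problem is stated symbolically, the mass, distance, and friction coefficient below are assumed values for illustration only:

```python
# Assumed sample values: 50 kg safe pushed 3 m, mu_k = 0.3, g = 9.8 m/s^2.
m, d, mu_k, g = 50.0, 3.0, 0.3, 9.8

N = m * g              # normal force balances the weight
F = mu_k * N           # applied force needed for constant velocity
W_applied = F * d      # applied force is parallel to the displacement
W_friction = -F * d    # kinetic friction opposes the motion
W_normal = 0.0         # normal force is perpendicular to the displacement
W_gravity = 0.0        # so is the weight

total = W_applied + W_friction + W_normal + W_gravity
print(round(W_applied, 1), round(W_friction, 1), total)   # 441.0 -441.0 0.0
```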
Example Problem 1
A crate with mass m is pulled up a slope (angle of inclination is [theta]) with constant velocity. Calculate the amount of work done by the force after the crate has moved to a height h (see Figure 7.4).
Figure 7.4. Example Problem 1.
The coordinate system that will be used is shown in Figure 7.4. Since the crate is moving with a constant velocity, the net force in the x and y direction must be zero. The net force in the x direction is given by
and the force F required to move the crate with constant velocity is hereby fixed:
This force acts over a distance d. The value of d is fixed by the angle [theta] and the height h: d = h / sin([theta])
(see Figure 7.4). The work done by the force on the crate is given by
The work done on the crate by the gravitational force is given by
The work done on the crate by the normal force N is zero since N is perpendicular to d. We conclude that the total work done on the crate is given by
which was expected since the net force on the crate is zero.
Figure 7.5. Crate moved in vertical direction.
If the same crate had been lifted by a height h in the vertical direction (see Figure 7.5), the force F required to produce a constant velocity would be equal to
F = m g
This force acts over a distance h, and the work done by this force on the object is
WF = m g h
which is equal to the work done by the force on the inclined slope. Although the work done by each force is the same, the strength of the required force is very different in each of the two cases.
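A sketch comparing the two cases numerically; the mass, height, and angle are assumed for illustration, and friction is neglected, as in the example:

```python
import math

m, h, g = 20.0, 1.5, 9.8            # assumed sample values
theta = math.radians(30.0)

# Pulling up the frictionless slope at constant velocity:
F_slope = m * g * math.sin(theta)   # force needed along the incline
d_slope = h / math.sin(theta)       # distance travelled along the incline
W_slope = F_slope * d_slope

# Lifting straight up at constant velocity:
F_lift = m * g
W_lift = F_lift * h

print(round(F_slope, 1), round(F_lift, 1))   # 98.0 vs 196.0 N -- very different forces
print(round(W_slope, 1), round(W_lift, 1))   # 294.0 J in both cases -- same work
```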
Example Problem 2
A 3.57 kg block is drawn at constant speed 4.06 m along a horizontal floor by a rope exerting a 7.68 N force at angle of 15deg. above the horizontal. Compute (a) the work done by the rope on the block, and (b) the coefficient of kinetic friction between block and floor.
Figure 7.6. Example Problem 2.
A total of four forces act on the mass m: the gravitational force W, the normal force N, the friction force fk and the applied force F. These four forces are shown schematically in Figure 7.6. Since the velocity of the mass is constant, its acceleration is equal to zero. The x and y-components of the net force acting on the mass are given by
Since the net force acting on the mass must be zero, the last equation can be used to determine the normal force N:
The kinetic friction force fk is given by
However, since the net component of the force along the x-axis must also be zero, the kinetic friction force fk is also related to the applied force in the following manner
Combining these last two expressions we can determine the coefficient of kinetic friction:
The work done by the rope on the mass m can be calculated rather easily:
The work done by the friction force is given by
The work done by the normal force N and the weight W is zero since the force and displacement are perpendicular. The total work done on the mass is therefore given by
This is not unexpected since the net force acting on the mass is zero.
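A numerical check of Example Problem 2, using the numbers given in the problem and assuming g = 9.8 m/s^2:

```python
import math

m, d, F, g = 3.57, 4.06, 7.68, 9.8   # mass, distance, rope force, gravity
theta = math.radians(15.0)           # angle of the rope above the horizontal

N = m * g - F * math.sin(theta)      # vertical equilibrium fixes the normal force
f_k = F * math.cos(theta)            # horizontal equilibrium fixes the friction force
mu_k = f_k / N                       # (b) coefficient of kinetic friction
W_rope = F * math.cos(theta) * d     # (a) work done by the rope on the block

print(round(mu_k, 3))                # ~0.225
print(round(W_rope, 1))              # ~30.1 J
```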
In the previous discussion we have assumed that the force acting on the object is constant (not dependent on position and/or time). However, in many cases this is not a correct assumption. By reducing the size of the displacement (for example by reducing the time interval) we can obtain an interval over which the force is almost constant. The work done over this small interval (dW) can be calculated
The total work done by the force F is the sum of all dW
Example: The Spring
An example of a varying force is the force exerted by a spring that is stretched or compressed. Suppose we define our coordinate system such that its origin coincides with the end point of a spring in its relaxed state (see Figure 7.7). The spring is stretched if x > 0 and compressed if x < 0. The force exerted by the spring will attempt to return the spring to its relaxed state:
if x < 0: F > 0
if x > 0: F < 0
It is found experimentally that for many springs the force is proportional to x:
F = - k x
Figure 7.7. Relaxed, Stretched and Compressed Springs.
where k is the spring constant (which is positive and independent of x). The SI unit for the spring constant is N/m. The larger the spring constant, the stiffer the spring. The work done by the spring on an object attached to its end can be calculated if we know the initial position xi and final position xf of the object: W = (1/2) k (xi^2 - xf^2)
If the spring is initially in its relaxed state (xi = 0) we find that the work done by the spring is W = - (1/2) k xf^2
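A small sketch of the spring-work formula; the spring constant and end points are assumed sample values:

```python
def spring_work(k, x_i, x_f):
    """Work done BY a spring obeying F = -k x as its end moves from x_i to x_f:
    W = (1/2) k x_i**2 - (1/2) k x_f**2."""
    return 0.5 * k * (x_i**2 - x_f**2)

k = 200.0                          # N/m, assumed
print(spring_work(k, 0.0, 0.1))    # -1.0 J: spring does negative work while being stretched
print(spring_work(k, 0.1, 0.0))    # +1.0 J: spring does positive work while relaxing
```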
Figure 7.8. Pendulum in x-y plane
Consider the pendulum shown in Figure 7.8. The pendulum is moved from position 1 to position 2 by a constant force F, pointing in the horizontal direction (see Figure 7.8). The mass of the pendulum is m. What is the work done by the sum of the applied force and the gravitational force to move the pendulum from position 1 to position 2 ?
Method 1 - Difficult
The vector sum of the applied force and the gravitational force is shown in Figure 7.9. The angle between the applied force F and the vector sum Ft is a. Figure 7.9 shows that the following equations relate F to Ft and Fg to Ft:
Figure 7.9. Vector sum Ft of Fg and F.
In order to calculate the work done by the total force on the pendulum, we need to know the angle between the total force and the direction of motion. Figure 7.10 shows that if the angle between the pendulum and the y-axis is [theta] , the angle between the total force and the direction of motion is [theta] + a. The distance dr is a function of d[theta]: dr = r d[theta]
For a very small distance dr, the angle between dr and Ft will not change. The work done by Ft on the pendulum is given by
The total work done by Ft can be obtained by integrating the equation for dW over all angles between [theta] = 0deg. and [theta] = [theta]max. The maximum angle can be easily expressed in terms of r and h:
Figure 7.10. Angle between sum force and direction.
The total work done is
Using one of the trigonometric identities (Appendix, page A15) we can rewrite this expression as
Using the equations shown above for Ft cos(a), Ft sin(a), r cos([theta]max) and r sin([theta]max) we can rewrite this expression and obtain for W:
Method 2 - Easy
The total work done on the pendulum by the applied force F and the gravitational force Fg could have been obtained much easier if the following relation had been used:
The total work W is the sum of the work done by the applied force F and the work done by the gravitational force Fg. These two quantities can be calculated easily:
And the total work is
which is identical to the result obtained using method 1.
The observation that an object is moving with a certain velocity indicates that at some time in the past work must have been done on it. Suppose our object has mass m and is moving with velocity v. Its current velocity is the result of a force F. For a given force F we can obtain the acceleration of our object:
Assuming that the object was at rest at time t = 0 we can obtain the velocity at any later time:
Therefore the time at which the mass reaches a velocity v can be calculated:
If at that time the force is turned off, the mass will keep moving with a constant velocity equal to v. In order to calculate the work done by the force F on the mass, we need to know the total distance over which this force acted. This distance d can be found easily from the equations of motion:
The work done by the force F on the mass is given by
The work is independent of the strength of the force F and depends only on the mass of the object and its velocity. Since this work is related to the motion of the object, it is called its kinetic energy K: K = (1/2) m v^2
If the kinetic energy of a particle changes from some initial value Ki to some final value Kf the amount of work done on the particle is given by
W = Kf - Ki
This indicates that the change in the kinetic energy of a particle is equal to the total work done on that particle by all the forces that act on it.
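A minimal numerical illustration of the work-energy theorem, with assumed sample numbers:

```python
# A constant 10 N net force pushes a 2 kg mass, initially at rest, over 5 m.
m, F, d = 2.0, 10.0, 5.0

W = F * d                      # work done by the net force (parallel to the motion)
K_f = W                        # K_i = 0, so W = K_f - K_i = K_f
v = (2.0 * K_f / m) ** 0.5     # final speed from K_f = (1/2) m v^2

print(W, round(v, 2))          # 50.0 J, 7.07 m/s
```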
Consider a particle with mass m moving along the x-axis and acted on by a net force F(x) that points along the x-axis. The work done by the force F on the mass m as the particle moves from its initial position xi to its final position xf is
From the definition of a we can conclude
Substituting this expression into the integral we obtain
Example Problem 3
An object with mass m is at rest at time t = 0. It falls under the influence of gravity through a distance h (see Figure 7.11). What is its velocity at that point ?
Since the object is initially at rest, its initial kinetic energy is zero:
Ki = 0 J
The force acting on the object is the gravitational force
Fg = m g
Figure 7.11. Falling Object.
The work done by the gravitational force on the object is simply
W = Fg h = m g h
The kinetic energy of the object after falling a distance h can be calculated:
W = m . g . h = Kf - Ki = Kf
and its velocity at that point is v = sqrt(2 g h)
Figure 7.12. Projectile motion.
Example Problem 4
A baseball is thrown up in the air with an initial velocity v0 (see Figure 7.12). What is the highest point it reaches ?
The initial kinetic energy of the baseball is Ki = (1/2) m v0^2
At its highest point the velocity of the baseball is zero, and therefore its kinetic energy is equal to zero. The work done on the baseball by the gravitational force can be obtained:
W = Kf - Ki = - Ki
In this case the direction of the displacement of the ball is opposite to the direction of the gravitational force. Suppose the baseball reaches a height h. At that point the work done on the baseball is
W = - m g h
The maximum height h can now be calculated: h = v0^2 / (2 g)
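A quick numerical version of Example Problem 4; the initial speed is an assumed sample value:

```python
g = 9.8
v0 = 20.0                 # assumed initial speed, m/s
h = v0**2 / (2.0 * g)     # from W = -m*g*h = Kf - Ki, with Kf = 0 and Ki = (1/2) m v0^2
print(round(h, 1))        # ~20.4 m
```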
In everyday life, the amount of work an apparatus can do is not always important. In general it is more important to know the time within which a certain amount of work can be done. For example: the explosive effect of dynamite is based on its capability to release large amounts of energy in a very short time. The same amount of work could have been done using a small space heater (and having it run for a long time) but the space heater would cause no explosion. The quantity of interest is power. The power tells us something about the rate of doing work. If an amount of work W is carried out in a time interval [Delta]t, the average power for this time-interval is
The instantaneous power can be written as
The SI unit of power is J/s or W (Watt). For example, our usage of electricity is always expressed in units of kilowatt . hour. This is equivalent to
1 kW . h = (10^3 W) (3600 s) = 3.6 x 10^6 J = 3.6 MJ
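A small sketch of the unit conversion and of average power; the motor numbers below are assumed:

```python
# 1 kW·h expressed in joules:
W_joules = 1.0e3 * 3600.0          # (10^3 W)(3600 s)
print(W_joules)                    # 3600000.0 J = 3.6 MJ

# Average power: an assumed motor does 7.2e5 J of work in 10 minutes.
P_avg = 7.2e5 / (10 * 60)
print(P_avg)                       # 1200.0 W = 1.2 kW
```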
We can also express the power delivered to a body in terms of the force that acts on the body and its velocity. Thus for a particle moving in one dimension we obtain
In the more general case of motion in 3 dimensions the power P can be expressed as | http://teacher.pas.rochester.edu/phy121/LectureNotes/Chapter07/Chapter7.html | 13 |
107 | Topics covered: Differentials, antiderivatives
Instructor: Prof. David Jerison
Lecture Notes (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Today we're moving on from theoretical things from the mean value theorem to the introduction to what's going to occupy us for the whole rest of the course, which is integration. So, in order to introduce that subject, I need to introduce for you a new notation, which is called differentials. I'm going to tell you what a differential is, and we'll get used to using it over time. If you have a function which is y = f ( x), then the differential of y is going to be denoted dy, and it's by definition f' ( x ) dx. So here's the notation. And because y is really equal to f, sometimes we also call it the differential of f. It's also called the differential of f. That's the notation, and it's the same thing as what happens if you formally just take this dx, act like it's a number and divide it into dy. So it means the same thing as this statement here. And this is more or less the Leibniz, not Leibniz, interpretation of derivatives. Of a derivative as a ratio of these so called differentials. It's a ratio of what are known as infinitesimals.
Now, this is kind of a vague notion, this little bit here being an infinitesimal. It's sort of like an infinitely small quantity. And Leibniz perfected the idea of dealing with these intuitively. And subsequently, mathematicians use them all the time. They're way more effective than the notation that Newton used. You might think that notations are a small matter, but they allow you to think much faster, sometimes. When you have the right names and the right symbols for everything. And in this case it made it very big difference. Leibniz's notation was adopted on the Continent and Newton dominated in Britain and, as a result, the British fell behind by one or two hundred years in the development of calculus. It was really a serious matter. So it's really well worth your while to get used to this idea of ratios. And it comes up all over the place, both in this class and also in multivariable calculus. It's used in many contexts.
So first of all, just to go a little bit easy. We'll illustrate it by its use in linear approximations, which we've already done. The picture here, which we've drawn a number of times, is that you have some function. And here's a value of the function. And it's coming up like that. So here's our function. And we go forward a little increment to a place which is dx further along. The idea of this notation is that dx is going to replace the symbol delta x, which is the change in x. And we won't think too hard about - well, this is a small quantity, this is a small quantity, we're not going to think too hard about what that means. Now, similarly, if you see how much we've gone up - well, this is kind of low, so it's a small bit here.
So this distance here is, previously we called it delta y. But now we're just going to call it dy. So dy replaces delta y. So this is the change in level of the function. And we'll represent it symbolically this way. Very frequently, this just saves a little bit of notation. For the purposes of this, we'll be doing the same things we did with delta x and delta y, but this is the way that Leibniz thought of it. And he would just have drawn it with this. So this distance here is dx and this distance here is dy. So for an example of linear approximation, we'll say what's 64.1, say, to the 1/3 power, approximately equal to? Now, I'm going to carry this out in this new notation here. The function involved is x ^ 1/3. And then its differential, dy. Now, I want to use this rule to get used to it. Because this is what we're going to be doing all of today: we're differentiating, or taking the differential of y. So that is going to be just the derivative. That's 1/3 x ^ - 2/3 dx. And now I'm just going to fill in exactly what this is. At x = 64, which is the natural place close by where it's easy to do the evaluations, we have y = 64 ^ 1/3, which is just 4.
And how about dy? Well, so this is a little bit more complicated. Put it over here. So dy = 1/3 ( 64) ^ - 2/3 dx. And that is (1/3 ) 1/16 dx, which is 1/48 dx. And now I'm going to work out what 64 to the, whatever it is here, this strange fraction. I just want to be very careful to explain to you one more thing. Which is that we're using x = 64, and so we're thinking of x + dx is going to be 64.1. So that means that dx is going to be 1/10. So that's the increment that we're interested in. And now I can carry out the approximation. The approximation says that 64.1 ^ 1/3 is, well, it's approximately what I'm going to call y + dy. Because really, the dy that I'm determining here is determined by this linear relation. dy = 1/48 dx. And so this is only approximately true. Because what's really true is that this = y + delta y. In our previous notation. So this is in disguise. What this is equal to. And that's the only approximately equal to what the linear approximation would give you. So, really, even though I wrote dy is this increment here, what it really is if dx is exactly that, is it's the amount it would go up if you went straight up the tangent line. So I'm not going to do that because that's not what people write. And that's not even what they think. They're really thinking of both dx and dy as being infinitesimally small. And here we're going to the finite level and doing it. So this is just something you have to live with, is a little ambiguity in this notation.
This is the approximation. And now I can just calculate these numbers here. y at this value is 4. And dy, as I said, is 1/48 dx. And that turns out to be 4 + 1/480, because dx is 1/10. So that's approximately 4.002. And that's our approximation. Now, let's just compare it to our previous notation. This will serve as a review of, if you like, of linear approximation. But what I want to emphasize is that these things are supposed to be the same. Just that it's really the same thing. It's just a different notation for the same thing. I remind you the basic formula for linear approximation is that f ( x ) is approximately f ( a) + f' ( a )( x - a). And we're applying it in the situation that a = 64 and f(x) = x ^ 1/3. And so f ( a ), which is f ( 64 ) is of course 4. And f' ( a ), which is a, (1/3)a ^ - 2/3, is in our case 1/16. No, 1/48. OK, that's the same calculation as before. And then our relationship becomes x ^ 1/3 is approximately equal to 4 + 1/48 ( x - a ), which is 64. So look, every single number that I've written over here has a corresponding number for this other method. And now if I plug in the value we happen to want, which is the 64.1, this would be 4 + 1/48 ( 1/10 ), which is just the same thing we had before. So again, same answer. Same method, new notation.
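A quick check of the lecture's approximation against the exact value (an editorial illustration, not part of the lecture):

```python
approx = 4 + (1/48) * 0.1      # y + dy with dx = 1/10
exact = 64.1 ** (1.0/3.0)

print(approx)                  # 4.00208333...
print(exact)                   # 4.00208224...
print(exact - approx)          # about -1e-6: the linear approximation is slightly high
```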
Well, now I get to use this notation in a novel way. So again, here's the notation. This notation of differential. The way I'm going to use it is in discussing something called the antiderivative. Again, this is a new notation now. But it's also a new idea. It's one that we haven't discussed yet. Namely, the notation that I want to describe here is what's called the integral of g ( x ) dx. And I'll denote that by a function capital G (x). So, you start with a function g ( x ) and you produce a function capital G ( x ), which is called the antiderivative of g. Notice there's a differential sitting in here. This symbol, this guy here, is called an integral sign. Or an integral, or this whole thing is called an integral. And another name for the antiderivative of g is the indefinite integral of g. And I'll explain to you why it's indefinite in just, very shortly here.
Well, so let's carry out some examples. Basically what I'd like to do is as many examples along the lines of all the derivatives that we derived at the beginning of the course. In other words, in principle you want to be able to integrate as many things as possible. We're going to start out with the integral of sine x dx. That's a function whose derivative is sine x. So what function would that be? Cosine x, minus, right. It's - cos x. So - cos x differentiated gives you sine x. So that is an antiderivative of sine. And it satisfies this property. So this function, capital G ( x ) = - cos x, has the property that its derivative is sine x. On the other hand, if you differentiate a constant, you get 0. So this answer is what's called indefinite. Because you can also add any constant here. And the same thing will be true. So, c is constant. And as I said, the integral is called indefinite. So that's an explanation for this modifier in front of the integral. It's indefinite because we actually didn't specify a single function. We don't get a single answer. Whenever you take the antiderivative of something it's ambiguous up to a constant.
Next, let's do some other standard functions from our repertoire. We have an integral of (x ^ a)dx. Some power, the integral of a power. And if you think about it, what you should be differentiating is one power larger than that. But then you have to divide by 1 / a + 1, in order that the differentiatiation be correct. So this just is the fact that d / dx of x ^ a + 1, or maybe I should even say it this way. Maybe I'll do it in differential notation. d ( x ^ a + 1) = (a + 1) (x ^ a) dx. So if I divide that through by a + 1, then I get the relation above. And because this is ambiguous up to a constant, it could be any additional constant added to that function.
Now, the identity that I wrote down below is correct. But this one is not always correct. What's the exception? Yeah. a equals
PROFESSOR: Negative 1. So this one is OK for all a. But this one fails because we've divided by zero when a = - 1. So this is only true when a is not equal to - 1. And in fact, of course, what's happening when a = 0, you're getting when you differentiate the constant. So there's a third case that we have to carry out. Which is the exceptional case, namely the integral of dx/x. And this time, if we just think back to what are - so what we're doing is thinking backwards here, which is a very important thing to do in math at all stages. We got all of our formulas, now we're reading them backwards. And so this one, you may remember, is ln x.
The reason why I want to do this carefully and slowly now, is right now I also want to write the more standard form which is presented. So first of all, first we have to add a constant. And please don't put the parentheses here. The parentheses go there. But there's another formula hiding in the woodwork here behind this one. Which is that you can also get the correct formula when x is negative. And that turns out to be this one here. So I'm treating the case, x positive, as being something that you know. But let's check the case, x negative. In order to check the case x negative, I have to differentiate the logarithm of the absolute value of x in that case. And that's the same thing, again, for x negative as the derivative of the logarithm of negative x. That's the formula, when x is negative. And if you carry that out, what you get, maybe I'll put this over here, is, well, it's the chain rule. It's (1 / -x) d/dx(-x).
So see that there are two minus signs. There's a - x in the denominator and then there's the derivative of - x in the numerator. That's just - 1. This part is - 1. So this negative 1 over negative x, which is 1 / x. So the negative signs cancel. If you just keep track of this in terms of ln negative x and its graph, that's a function that looks like this. For x negative. And its derivative is 1 / x, I claim. And if you just look at it a little bit carefully, you see that the slope is always negative. Right? So here the slope is negative. So it's going to be below the axis. And, in fact, it's getting steeper and steeper negative as we go down. And it's getting less and less negative as we go horizontally. So it's going like this, which is indeed the graph of this function, for x negative. Again, x negative.
So that's one other standard formula. And very quickly, very often, we won't put the absolute value signs. We'll only consider the case x positive here. But I just want you to have the tools to do it in case we want to use, we want to handle, both positive and negative x. Now, let's do two more examples. The integral of sec^2 x dx. These are supposed to get you to remember all of your differentiatation formulas, the standard ones. So this one, integral of sec^2 dx is what? Tan x. And here we have + c, alright? And then the last one of, a couple of, this type would be, let's see. I should do at least this one here, square root of 1 - x ^2. This is another notation, by the way, which is perfectly acceptable. Notice I've put the dx in the numerator and the function in the denominator here. So this one turns out to be sin-1 x. And, finally, let's see. About the integral of dx / 1 + x ^2. That one is tan -1 x.
For a little while, because you're reading these things backwards and forwards, you'll find this happens to you on exams. It gets slightly worse for a little while. You will antidifferentiate when you meant to differentiate. And you'll differentiate when you're meant to antidifferentiate. Don't get too frustrated by that. But eventually, you'll get them squared away. And it actually helps to do a lot of practice with antidifferentiations, or integrations, as they're sometimes called. Because that will solidify your remembering all of the differentiation formulas. So, last bit of information that I want to emphasize before we go on some more complicated examples is this It's obvious because the derivative of a constant is 0. That the antiderivative is ambiguous up to a constant. But it's very important to realize that this is the only ambiguity that there is.
So the last thing that I want to tell you about is uniqueness of antiderivatives up to a constant. The theorem is the following. The theorem is if F' = G', then F and G differ by a constant. So F ( x ) = G ( x) + c. But that means, not only that these are antiderivatives, all these things with these + c's are antiderivatives. But these are the only ones. Which is very reassuring. And that's a kind of uniqueness, although it's uniqueness up to a constant, it's acceptable to us. Now, the proof of this is very quick. But this is a fundamental fact. The proof is the following. If F' = G', then if you take the difference between the two functions, its derivative, which of course is F' - G' = 0. Hence, F( x) - G (x) is a constant. Now, this is a key fact. Very important fact. We deduced it last time from the mean value theorem. It's not a small matter. It's a very, very important thing. It's the basis for calculus. It's the reason why calculus makes sense. If we didn't have the fact that a zero derivative implied that the function was constant, we would be stuck. Calculus would be just useless for us. The point is, the rate of change is supposed to determine the function up to this starting value. So this conclusion is very important. And we already checked it last time, this conclusion. And now just by algebra, I can rearrange this to say that F ( x) = G ( x) + c.
Now, maybe I should leave differentials up here. Because I want to illustrate. So let's go on to some trickier, slightly trickier, integrals. Here's an example. The integral of, say, x^3 ( x ^ 4 + 2) ^ 5 dx. This is a function which you actually do know how to integrate, because we already have a formula for all powers. Namely, the integral of x ^ a is equal to this. And even if it were a negative power, we could do it. So it's OK. On the other hand, to expand the 5th power here is quite a mess. And this is just a very, very bad idea. There's another trick for doing this that evaluates this much more efficiently. And it's the only device that we're going to learn now for integrating. Integration actually is much harder than differentiation. Symbolically. It's quite difficult. And occasionally impossible. And so we have to go about it gently. But for the purposes of this unit, we're only going to use one method. Which is very good. That means whenever you see an integral, either you'll be able to divine immediately what the answer is, or you'll use this method. So this is it. The trick is called the method of substitution. And it is tailor-made for the notion of differentials. So, tailor-made for differential notation.
The idea is the following. I'm going to define a new function. And it's the messiest function that I see here. It's u = x ^ 4 + 2. And then, I'm going to take its differential, and what I discover, if I look at its formula and the rule for differentials, which is right here, is: its differential is what? 4x^3 dx. Now, lo and behold with these two quantities, I can substitute, I can plug in to this integral. And I will simplify it considerably. So how does that happen? Well, this integral is the same thing as, well, really I should combine it the other way. So let me move this over. So there are two pieces here. And this one is u ^ 5. And this one is 1/4 du. Now, that makes it the integral of (u ^ 5 du) / 4. And that's relatively easy to integrate. That is just a power. So let's see. It's just 1/20 u to the - not 1/20. The antiderivative of u ^ 5 is u ^ 6. With the 1/6, so it's 1/24 u ^ 6 + c. Now, that's not the answer to the question. It's almost the answer to the question. Why isn't it the answer? It isn't the answer because now the answer's expressed in terms of u. Whereas the problem was posed in terms of this variable x. So we must change back to our variable here. And we do that just by writing it in. So it's 1/24 (x ^ 4 + 2) ^ 6 + c. And this is the end of the problem. Yeah, question.
PROFESSOR: The question is, can you see it directly? Yeah. And we're going to talk about that in just one second. OK.
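A symbolic check of the substitution result using SymPy (an editorial addition, not part of the lecture):

```python
import sympy as sp

x = sp.symbols('x')
candidate = (x**4 + 2)**6 / 24          # antiderivative found by the substitution u = x^4 + 2
integrand = x**3 * (x**4 + 2)**5

# Differentiating the candidate should give back the integrand.
print(sp.simplify(sp.diff(candidate, x) - integrand))   # 0
```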
Now, I'm going to do one more example and illustrate this method. Here's another example. The integral of x dx / square root of 1 + x ^2. Now, here's another example. Now, the method of substitution leads us to the idea u = 1 + x ^2. du = 2x dx, etc. It takes about as long as this other problem did. To figure out what's going on. It's a very similar sort of thing. You end up integrating u ^ - 1/2. It leads to the integral of u ^ - 1/2 du. Is everybody seeing where this..? However, there is a slightly better method. So recommended method. And I call this method advanced guessing. What advanced guessing means is that you've done enough of these problems that you can see two steps ahead. And you know what's going to happen. So the advanced guessing leads you to believe that here you had a power - 1/2, here you have the differential of the thing. So it's going to work out somehow. And the advanced guessing allows you to guess that the answer should be something like this. (1 + x ^2) ^ 1/2. So this is your advanced guess. And now you just differentiate it, and see whether it works. Well, here it is. It's 1/2 (1 + x ^2) ^ - 1/2( 2x), that's the chain rule here. Which, sure enough, gives you x / square root of 1 + x ^2. So we're done. And so the answer is square root of (1 + x^2) + c.
Let me illustrate this further with another example. I strongly recommend that you do this, but you have to get used to it. So here's another example. e ^ 6x dx. My advanced guess is e ^ 6x. And if I check, when I differentiate it, I get 6e ^ 6x. That's the derivative. And so I know that the answer, so now I know what the answer is. It's 1/6 e ^ 6x + c. Now, OK, you could, it's also OK, but slow, to use a substitution, to use u = 6x. Then you're going to get du = 6dx ... It's going to work, it's just a waste of time.
Well, I'm going to give you a couple more examples. So how about this one. x ( e^ - x^2) dx. What's the guess? Anybody have a guess? Well, you could also correct. So I don't want you to bother - yeah, go ahead.
PROFESSOR: Yeah, so you're already one step ahead of me. Because this is too easy. When they get more complicated, you just want to make this guess here. So various people have said 1/2, and they understand that there's 1/2 going here. But let me just show you what happens, OK? If you make this guess and you differentiate it, what you get here is e^ - x ^2 times the derivative of - x^2, so that's - 2x. So now you see that you're off by a factor of not 2, but - 2. So a number of you were saying that. So the answer is - 1/2 e^ - x ^2 + c. And I can guarantee you, having watched this on various problems, that people who don't write this out make arithmetic mistakes. In other words, there is a limit to how much people can think ahead and guess correctly. Another way of doing it, by the way, is simply to write this thing in and then fix the coefficient by doing the differentiation here. That's perfectly OK as well.
Alright, one more example. We're going to integrate sin x cos x dx. So what's a good guess for this one?
PROFESSOR: Someone suggesting sine ^2 x. So let's try that. Over 2 - well, we'll get the coefficient in just a second. So sine ^2 x, if I differentiate I get 2 sine x cosine x. So that's off by a factor of 2. So the answer is 1/2 sine ^2 x. But now I want to point out to you that there's another way of doing this problem. It's also true that if you differentiate cosine ^2 x, you get 2 cos x ( - sine x). So another answer is that the integral of sin x cos x dx = - 1/2 cos^2 x + c. So what is going on here? What's the problem with this?
PROFESSOR: Pardon me?
PROFESSOR: Integrals aren't unique. That's part of the - but somehow these two answers still have to be the same.
PROFESSOR: OK. What do you think?
STUDENT: If you add them together, you just get c.
PROFESSOR: If you add them together you get c. Well, actually, that's almost right. That's not what you want to do, though. You don't want to add them. You want to subtract them. So let's see what happens when you subtract them. I'm going to ignore the c, for the time being. I get sin^2 x, 1/2 sin^2 x - (-1/2 cos^2 x). So the difference between them, we hope to be 0. But actually of course it's not 0. What it is, is it's 1/2 (sin^2 + cos^2) which is 1/2. It's not 0, it's a constant. So what's really going on here is that these two formulas are the same. But you have to understand how to interpret them. The two constants, here's a constant up here. There's a constant, c1 associated to this one. There's a different constant, c2 associated to this one. And this family of functions for all possible c1s and all possible c2s, is the same family of functions. Now, what's the relationship between c1 and c2? Well, if you do the subtraction, c1 - c2 has to be equal to 1/2. They're both constants, but they differ by 1/2. So this explains, when you're dealing with families of things, they don't have to look the same. And there are lots of trig functions which look a little different. So there can be several formulas that actually are the same. And it's hard to check that they're actually the same. You need some trig identities to do it.
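The same point can be checked symbolically (again an editorial illustration, not part of the lecture): both candidates differentiate to sin x cos x, and they differ by the constant 1/2.

```python
import sympy as sp

x = sp.symbols('x')
F1 = sp.sin(x)**2 / 2
F2 = -sp.cos(x)**2 / 2

print(sp.simplify(sp.diff(F1, x) - sp.sin(x)*sp.cos(x)))   # 0: F1 is an antiderivative
print(sp.simplify(sp.diff(F2, x) - sp.sin(x)*sp.cos(x)))   # 0: so is F2
print(sp.simplify(F1 - F2))                                # 1/2: they differ by a constant
```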
Let's do one more example here. Here's another one. Now, you may be thinking, and a lot of people are, thinking ugh, it's got a ln in it. If you're experienced, you actually can read off the answer just the way there were several people who were shouting out the answers when we were doing the rest of these problems. But, you do need to relax. Because in this case, now this is definitely not true in general when we do integrals. But, for now, when we do integrals, they'll all be manageable. And there's only one method. Which is substitution. And in the substitution method, you want to go for the trickiest part. And substitute for that. So the substitution that I propose to you is that u should be ln x. And the advantage that that has is that its differential is simpler than itself. So du = dx /x. Remember, we used that in logarithmic differentiation, too.
So now we can express this using this substitution. And what we get is, the integral of, so I'll divide the two parts here. It's 1 / ln x, and then it's dx / x. And this part is 1 / u, and this part is du. So it's the integral of du / u. And that is ln u + c. Which altogether, if I put back in what u is, is ln (ln x) + c. And now we see some uglier things. In fact, technically speaking, we could take the absolute value here. And then this would be absolute values there. So this is the type of example where I really would recommend that you actually use the substitution, at least for now. Alright, tomorrow we're going to be doing differential equations. And we're going to review for the test. I'm going to give you a handout telling you just exactly what's going to be on the test. So, see you tomorrow. | http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/video-lectures/lecture-15-antiderivatives/ | 13 |
59 | How to Calculate Centripetal Acceleration of an Orbiting Object
In physics, you can apply Newton’s first and second laws to calculate the centripetal acceleration of an orbiting object. Newton’s first law says that when there are no net forces, an object in motion will continue to move uniformly in a straight line. For an object to move in a circle, a force has to cause the change in direction — this force is called the centripetal force. Centripetal force is always directed toward the center of the circle.
The centripetal acceleration is proportional to the centripetal force (obeying Newton’s second law). This is the component of the object’s acceleration in the radial direction (directed toward the center of the circle), and it’s the rate of change in the object’s velocity as the object moves in a circle; the centripetal force does not change the magnitude of the velocity, only the direction.
You can connect angular quantities, such as angular velocity, to centripetal acceleration. Centripetal acceleration is given by the following equation: a = v^2 / r
where v is the velocity and r is the radius. Linear velocity is easy enough to tie to angular velocity because v = rω.
Therefore, you can rewrite the acceleration formula as a = (rω)^2 / r.
The centripetal-acceleration equation simplifies to a = rω^2.
Nothing to it. The equation for centripetal acceleration means that you can find the centripetal acceleration needed to keep an object moving in a circle given the circle’s radius and the object’s angular velocity.
Say that you want to calculate the centripetal acceleration of the moon around the Earth. Start with the old equation a = v^2 / r.
First you have to calculate the tangential velocity of the moon in its orbit. Alternatively, you can use the new version of the equation, a = rω^2.
This is easier because the moon orbits the Earth in about 28 days, so you can easily calculate the moon’s angular velocity.
Because the moon makes a complete orbit around the Earth in about 28 days, it travels 2π radians
around the Earth in that period, so its angular velocity is ω = 2π radians / 28 days.
Converting 28 days to seconds gives you the following: 28 days × 24 hours/day × 3,600 seconds/hour ≈ 2.42 × 10^6 seconds.
Therefore, you get the following angular velocity: ω = 2π radians / (2.42 × 10^6 seconds) ≈ 2.60 × 10^-6 radians per second.
You now have the moon's angular velocity, ω ≈ 2.60 × 10^-6 s^-1.
The average radius of the moon's orbit is about 3.85 × 10^8 meters,
so its centripetal acceleration is a = rω^2 ≈ (3.85 × 10^8 m)(2.60 × 10^-6 s^-1)^2 ≈ 2.6 × 10^-3 m/s^2.
In the preceding equation, the units of angular velocity, radians per second, are written as s–1 because the radian is a dimensionless unit. A radian is the angle swept by an arc that has a length equal to the radius of the circle. Think of it as a particular portion of the whole circle; as such, it has no dimensions. So when you have radians per second, you can omit radians, which leaves you with per second. Another way to write this is to use the exponent –1, so you can represent radians per second as s–1.
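A short script reproducing the calculation above, using the values given in the text:

```python
import math

T = 28 * 24 * 3600.0        # ~28-day orbital period, in seconds
omega = 2 * math.pi / T     # angular velocity, rad/s
r = 3.85e8                  # average orbital radius, m

a = r * omega**2            # centripetal acceleration
print(omega)                # ~2.6e-6 s^-1
print(a)                    # ~2.6e-3 m/s^2
```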
Just for kicks, you can also find the force needed to keep the moon going around in its orbit. Force equals mass times acceleration, so you multiply the centripetal acceleration by the mass of the moon (about 7.35 × 10^22 kg), which gives a force of roughly 2 × 10^20 N. | http://www.dummies.com/how-to/content/how-to-calculate-centripetal-acceleration-of-an-or.navId-407029.html | 13
54 | Once we understand the trigonometric functions sine, cosine, and tangent, we are ready to learn how to use inverse trigonometric functions to find the measure of the angle the function represents. Inverse trigonometric functions, found on any standard scientific or graphing calculator, are a vital part of trigonometry and will be encountered often in Calculus.
In right triangles, when we're talking about cosine, sine and tangent, sometimes you're going to need to use what's known as an inverse trig function. Let's look at what that means. If I asked you to find the measure of angle b, so that's the angle right up here, well what we're going to do is we're going to say: are we going to use cosine, sine or tangent? Relative to b I know the opposite side and the adjacent; opposite and adjacent is tangent, so I can say that the tangent of b is equal to the ratio of 14 to 12, the opposite to the adjacent.
But now we have a problem: how do I find out what b is? I know eventually I want to see b equals. Right now we have tangent of b; if I went back to Algebra I would say divide both sides by tangent, because tangent appears to be multiplying b, but that would be incorrect. Trigonometry doesn't work exactly like Algebra; the way that we isolate b in this problem is by taking the inverse trig function. So since we have tangent of b, we're going to say that if I took the inverse tangent of 14 twelfths, that's what b would be. To use b as many times as I can in one sentence. So the inverse tangent looks like tangent to the negative 1. So in your calculator, right above your tangent button, is the inverse tangent.
And you're probably going to have to press something like second to get there. So in my calculator I'm going to type inverse tangent of this fraction, 14 over 12, and I get 49.4. So b equals 49.4 degrees. Let's look at two other quick examples. Let's say I told you sine of x is equal to 0.5; what is x? To solve for x we're going to take the inverse sine of both sides of this equation. So on the other side I'm going to write sine inverse of 0.5. So the inverse sine of the sine of x is x, and that's the reason why we use that inverse property. So I'm going to type in inverse sine of 0.5 and I get 30 degrees.
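A quick check of these two examples with Python's math module (the cosine example that follows works the same way with math.acos); this snippet is an editorial addition, not part of the transcript:

```python
import math

# math works in radians, so convert the results to degrees.
b = math.degrees(math.atan(14 / 12))   # tangent of b = 14/12
x = math.degrees(math.asin(0.5))       # sine of x = 0.5

print(round(b, 1))   # 49.4
print(round(x, 1))   # 30.0
```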
I've still got one last one. If I said cosine of y is equal to 18 divided by 22, to solve for y we're going to take the inverse cosine. So we're going to take the inverse cosine of the cosine of y, which will give us y, and on the right side I'm going to have to take the inverse cosine of 18 twenty-seconds. So I'm going to say inverse cosine of 18 divided by 22 is 35.1. So whenever you need to solve for an angle when you're talking about a trig function, you can get at that variable by taking the inverse function of whatever you're talking about: sine, cosine or tangent. | http://www.brightstorm.com/math/precalculus/basic-trigonometry/inverse-trig-functions-problem-2/ | 13
51 | TenMarks teaches you how to find the perimeter of a figure on a coordinate graph.
Read the full transcript »
How to Find Perimeter on a Coordinate Graph. In this problem, in the first part we need to find the perimeter of the rectangle. Here we have our rectangle, and we need to find the perimeter of a polygon. A rectangle is a polygon, so to find the perimeter of a polygon we have to add the lengths of all sides. So, the rectangle has two equal horizontal and two equal vertical sides. Remember, the perimeter is the sum of the lengths of all four sides. In finding the perimeter, the first step that we need to do is find the length of the horizontal. We're going to look for the horizontal length. Notice that this does not have the lengths labeled; however, it's on a coordinate plane, so we have our coordinates. When you have coordinates, coordinates are given in x, y. So, the x coordinate is the first number in each ordered pair. So, for the horizontal length, the x coordinates are 2 and 5. The length of the horizontal line is going to be the difference between the x coordinates, so it's going to be 5 - 2. So, 5 - 2 is 3 units. The horizontal sides are three units each. This is 3 and this is 3, three units each. Our second step is we need to find the length of the vertical sides, so now we're looking at the vertical sides. The y coordinate in an ordered pair is the second number. To find the length of the vertical line segment, which is the width of the rectangle, we need to find the distance between the y coordinates. These are the y coordinates. Notice how it's the same on both sides. So, the length of the vertical line is going to be the difference between the y coordinates, so it will be 3 - 1, and 3 - 1 is 2 units. So, the vertical sides are two units each. So, it's two units on this side and two units on the other side. Now, our third step in solving this is we need to add the lengths of the sides to find the perimeter. We're going to find the perimeter by adding the sum of all four lengths. So, the perimeter is the sum of all four lengths. We're going to go ahead and add them up. I'm going to add the horizontal lengths. I'm going to take 3 + 3 and then I'm going to add the vertical lengths, 2 + 2. When we add these up, we get 10. So, the perimeter of the rectangle is 10 units. Let's move on to the second problem. Here we need to find the perimeter of this figure. The first step we need to do is we need to label all the coordinates marked in the figure. I'm going to go ahead and label all my parts, so I'm going to say that this is A, B, C, D, E and F. I'm labeling my points and now I'm going to mark all the coordinates. Coordinate A is going to be, remember it's x, we do coordinates in x, y. So, A is 3 and 2, B is 3 and 8, C is 8 and 8, D is 8 and 5, E is 6 and 5 and then F is 6 and 2. So, those are the six coordinates of this figure. We need to find what the horizontal lines are. Our horizontal lines are going to be: B and C is a horizontal line, we also have E and D as a horizontal line, and we also have A and F as a horizontal line. Now, we need to find our vertical lines. Our vertical lines go up and down. So, we have A and B, we have C and D and we have E and F. Now, we have gotten all the information that we need in order to find the perimeter of this figure. Our first step is we need to find the length of the horizontal sides of the figure. So, the first thing let's do is find the length of B, C.
So, the length of B, C, we need to look at the x coordinate and remember the x coordinate is the first number in each ordered pair and remember, the ordered pair is B, C. So, the x coordinate for B is 3 and then the x coordinate for C is 8. We’re going to take the difference. So B, C is going to equal 8 - 3 which would give us 5 units. Similarly, let’s find D and E. So, you can put it D, E, or E, D; it’s both the same. So D, E, we look at the x coordinates and we get 8 and 6. So, 8 - 6 would give us 2 units and then we have F and A. F and A is 6 and 3 which would give us three units. Now, we | http://www.healthline.com/hlvideo-5min/how-to-find-perimeter-on-a-coordinate-graph-285016605 | 13 |
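The same bookkeeping can be written as a short sketch (my own addition). The six coordinates are the ones read off in the transcript; because every side of the figure is horizontal or vertical, each side length is just a difference of x or y coordinates. The 22-unit total is my own check, since the transcript is cut off before it finishes:

# A, B, C, D, E, F from the transcript
points = [(3, 2), (3, 8), (8, 8), (8, 5), (6, 5), (6, 2)]

perimeter = 0
for i in range(len(points)):
    x1, y1 = points[i]
    x2, y2 = points[(i + 1) % len(points)]   # wrap around from F back to A
    perimeter += abs(x2 - x1) + abs(y2 - y1) # one of the two terms is zero on each side
print(perimeter)   # 22 units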
138 | THE TWO COMPETING EXPLANATIONS FOUND IN K-6 BOOKS:
Here is the typical "Airfoil shape" or "Popular" explanation of
airfoil lift which commonly appears in children's science books:
As air approaches a wing, it is divided into two parts, the part which
flows above the wing, and the part which flows below. In order to create
a lifting force, the upper surface of the wing must be longer and more
curved than the lower surface. Because the air flowing above and below
the wing must recombine at the trailing edge of the wing, and because the
path along the upper surface is longer, the air on the upper surface must
flow faster than the air below if both parts are to reach the trailing
edge at the same time. The "Bernoulli Principle" says that the total
energy contained in each part of the air is constant, and when air gains
kinetic energy (speed) it must lose potential energy (pressure,) and so
high-speed air has a lower pressure than low-speed air. Therefore,
because the air flows faster on the top of the wing than below, the
pressure above the wing is lower than the pressure below the wing, and the wing is driven upwards by
the higher pressure below. In modern wings the low pressure above the
wing creates most of the lifting force, so it isn't far wrong to say
that the wing is essentially 'sucked' upwards. (Note however that
"suction" doesn't exist, because air molecules can only push upon a
surface, and they never can pull.)
Uh oh, wind tunnel photographs of lift-generating wings reveal a
serious problem with the above description! They show that the
divided parcels DO NOT RECOMBINE AT THE TRAILING EDGE. Whenever
an airfoil is adjusted to give lift, then the parcels of air above
the wing move FAR faster than those below, and the lower parcels
far behind. After the wing has passed by, the parcels remain
forever divided. This has nothing to do with the wing's path lengths.
This even applies to thin flat wings such as a "flying
barn door." The wind
tunnel experiments show that the "wing-shape" argument regarding
difference in path-length is simply wrong.
Also, real-world aircraft demonstrate another fallacy. In order to
create lift, must a wing have greater path length on the
upper surface than on the lower? No. Thin cambered (curved) wings
such as those on hang gliders and on rubberband-powered balsa
gliders, have equal path length above and below, yet they generate
lift. Still the air does flow faster above these wings than below.
However, since there is no difference in path length, we cannot
refer to path length to explain the difference in air speed above
and below the thin wing. The typical "airfoil shape" explanation
cannot tell us why a paper airplane can fly, because it does not
tell us why the air above the paper wing moves faster.
It is also a fallacy that in order to create lift, a wing *must* be
more curved on top. In fact, wings which are designed for high
speed and aerobatics are symmetrical streamlined shapes, with equal
curvature above and below. Some exotic airfoil shapes are even
flat on top and more curved on the bottom! (NASA's "supercritical"
wing designs, for example.)
If the typical "popular" or "airfoil-shape" explanation is
correct, then how can symmetrical wings and thin cambered wings
work at all? How can rubberband balsa gliders work? Those who
support the "path length" explanation will sometimes suggest that
some other method must be used to explain these particular wings.
But if so, why then do so many books put forth only the above
"popular" explanation as the single explanation of aerodynamic
lift? Why do they avoid detailing or even mentioning any other
important explanations of lifting force?
The cloth aircraft of old had single-layer wings having
identical path length above and below. If the "Wing-shape" or
"popular" explanation is correct and path-length is very
important, how can the Wrights' flyer have worked at all?
Conversely, we do find that thin airfoils such as the Wrights'
have faster flow on the upper surface than the lower surface.
Since the path lengths are identical, how can we explain this?
The above "path length" viewpoint would predict that the
addition of a lump to the top of a wing should always increase the
lift (since it increases the upper surface path length.) In fact,
the addition of a lump does not increase lift. This suggests that
there is a problem with the "airfoil shape" explanation of lift.
Forces on sailboat sails are explained using the typical
"pathlength/wingshape" explanation above. But sailboat sails are
thin cloth membranes with identical path-lengths on either side.
Why should air on either side of a sail have different velocities
if the path length is the same?
Children have experience with rubberband-powered balsa wood
aircraft having wings composed of a single flat layer of very
thin wood. Paper airplanes usually have flat thin wings. These
aircraft should not be able to fly, if the "path length" story were right. How can that version explain
their successful operation?
Regardless of the angle of attack, if a wing does not deflect
air downwards, it creates no lift at all. To say otherwise would
go against the law of Conservation of Momentum. Yet those who
believe in the "airfoil-shape" explanation commonly state that
wings operate only by pressure, and Newton's laws are unimportant.
This is a direct violation of basic physics principles.
Bernoulli's equation incorporates basic physics, and anyone who
departs from Newton must automatically depart from Bernoulli as
well. Besides buoyancy and helium balloons, the only way to remain
aloft is to take some matter and accelerate it downwards. The
downward force applied to the matter is equal to the upward force
applied by the matter against the craft. Rockets work like this,
as do ship propellers, jet engines, helicopters, ...and wings!
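As a rough sketch of that momentum bookkeeping (all of the numbers below are invented for illustration and are not taken from this article), the lift equals the rate at which downward momentum is given to the air:

rho = 1.225        # kg/m^3, sea-level air density
airspeed = 60.0    # m/s, forward speed (assumed)
area = 30.0        # m^2, effective cross-section of air the wing works on (assumed)
downwash = 3.0     # m/s, average downward velocity given to that air (assumed)

mass_per_second = rho * airspeed * area   # kg of air deflected each second
lift = mass_per_second * downwash         # newtons, from F = d(mv)/dt
print(round(lift))                        # 6615.0 N with these made-up numbers (roughly 670 kg supported)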
Some people argue that the "path length" explanation must be
right, since some wings generate lift even at zero angle of
attack. However, Attack-angle is determined geometrically, by
drawing a line between the tip of the leading and trailing edge.
This geometrically-determined attack angle can be misleading:
Small bumps on the leading edge of a blunt-nosed wing have a large
effect on where the line is drawn. These bumps strongly affect
the determination of "attack angle," yet these bumps may have
little if any effect on the lifting forces being generated. Also,
inertial effects will cause a thin, curved airfoil to deflect air
downwards from its trailing edge more than it deflects air
upwards at its leading edge, and the unequal deflection generates
lift even at zero angle of attack. This type of wing may APPEAR
to have zero attack angle, but the inertia of air causes the air
to flow straight from the trailing edge of the airfoil. Because
of inertia, the trailing edge of a cambered airfoil itself behaves
as a tilted plane, and therefore the airfoil effectively has a
positive angle which causes air to be deflected. Other cambered
wings are similar; they still have a positive "effective" attack
angle even when their geometrical attack angle is zero.
Some people argue that flat wings, symmetrical aerobatic wings,
Supercritical wings, and thin cloth wings do not employ the
Bernoulli Effect, and these wings must instead be explained by
Newton and attack angle. But as I said before, if jet fighters
and the Wright Flyer use Attack Angle rather than Bernoulli
Effect, why do the books teach only Bernoulli Effect? At the very
least, these books are ignoring an entire class of aircraft by
never mentioning Attack Angle. However, even these thin wings and
symmetrical wings exhibit the full-blown Bernoulli principle!
There is a difference in speed between the upper and lower air
streams along flat wings. If a flat sheet of plywood is tilted
into the air stream, the air flows faster above the sheet than
below, and lift is generated by the pressure difference. But the
flat sheet also deflects the air, and just as much lift is
generated by deflection of air. In fact, 100% of aerodynamic lift
can be explained by the Bernoulli principle. And 100% of lift can
be explained by Newton's third law. They are two different ways
of explaining a single event. However, any appeals to differences
in path length are simply wrong, and any book which uses that
explanation is acting to spread science misconceptions.
An alternate explanation of lift: "ATTACK ANGLE"
As air flows over a wing, the flow adheres to the surfaces of the wing.
This is called the "Coanda effect." Because the wing is tilted, the air
is deflected downwards as it moves over the wing's surfaces. Air which
flows below the wing is pushed downwards by the wing surface, and because
the wing pushes down on the air, the air must push upwards on the wing,
creating a lifting force. Air which flows over the upper surface of the
wing is adhering to the surface also. The wing "pulls downwards" on the
air as it flows over the tilted wing, and so the air pulls upwards on the
wing, creating more lifting force. (Actually the air follows the wing
because of reduced pressure, the "pull" is not really an attraction.) The
lifting force is created by Newton's Third Law and by conservation of
momentum, as the flowing air which has mass is deflected downward as the
wing moves forward. Because of Coanda Effect, the upper surface of the
wing actually deflects more air than does the lower surface.
My notes on "attack angle":
If you understand the "attack angle" explanation, then the causes
of other aircraft phenomena such as wingtip vortex will suddenly
become clear. The air at the trailing edge of the wing is
streaming downwards into the surrounding still air. The edge
of this mass of air curls up as the air moves downwards, creating
the "wingtip vortex." A similar effect can be seen when a drop
of dye falls into clear water: the edge of the mass of dye curls
up as the dye forces itself downwards into the water, resulting
in a ring vortex which moves downwards.
There is one major error associated with the "attack angle"
explanation. This is the idea that only the LOWER surface of
the wing can generate a lifting force. Some people imagine that
air bounces off the bottom of the tilted wing, and they come to
the mistaken belief that this is the main source of the lifting
force. Even Newton himself apparently made this mistake, and so
overestimated the necessary size of man-lifting craft. In reality,
air is deflected by both the upper and the lower surfaces of the
wing, with the major part being deflected by the upper surface.
Because a large, heavy aircraft must deflect an enormous amount of
air downwards, people standing under a low-flying aircraft are
subjected to a huge downblast of air. They are essentially feeling
a portion of the pressure which supports the plane.
The downwash can be useful: when a cropduster flies low over a
field, the spray is injected into the airflow coming
from the wings. Rather than trailing straight back behind the
craft, the spray is sent downwards by the downwash from the wings.
Also, during takeoff the downwash interacts with the ground and
causes lift to greatly increase. Pilots use this effect to gain a
large airspeed just after takeoff. Because of downwash "ground
effect," their engine needs to do much less work in keeping their
aircraft aloft, therefore the extra power available can be used to
speed up the plane.
To create adequate lift at extremely low speeds, an airfoil
must be operated at a large angle of attack, and this leads to
airflow detachment from the wing's upper surface (stall). To
prevent this, the airfoil must be carefully shaped. A good low-
speed airfoil is much more curved on the top, since lift can be
created only if the wing surface carefully deflects air downwards
by adhesion. Thus one origin of the misconception involving "more
curved upper surface." The surface must be curved to prevent
stall, not to create lift. The situation with the lower surface is
different, since the lower surface can deflect the air by collision.
Even so, it makes sense to have the lower surface be somewhat
concave, so that the air is slowly deflected as it proceeds along,
and so the upwards pressure is distributed uniformly over the lower surface.
Why does flowing air adhere to the upper surface of the wing? This
is called the Coanda effect. Apparently Dr. Bernoulli has a better
PR department than Dr. Coanda, (grin!) since everyone has heard of
Bernoulli, while Coanda is rarely mentioned in textbooks.
The only correct part of the "wingshape/pathlength" explanation of
lift is the description of the Bernoulli effect itself. But the
"Bernoulli Effect" can also be interpreted thus: because the
wing is tilted, it creates a pocket of reduced pressure behind its
upper surface. Air must rush into this pocket. And at the tilted
lower surface, air collides with the surface and creates a region
of increased pressure. Any air which approaches the high pressure
region is slowed down. Therefore, the pressure is the cause of
the air velocity, not vice-versa as in the "airfoil-shape"
explanation above. Also, it is wrong to imagine that the low
pressure above the wing is caused by the "Bernoulli effect" while
the high pressure below the wings is not. Both pressure
variations have similar origin, but opposite values.
The "airfoil shape" explanation could be very useful in
calculating the lifting force of an airfoil. Knowing the fluid
velocity at all points on the airfoil surface, the pressure may be
calculated via Bernoulli's equation at all points, and if the
pressure at each point is vector summed, the total lifting force
upon the wing will be obtained. The trick then is knowing how to
obtain the fluid velocities. Appeals to differences in pathlength
do not work, so other methods (circulation and Kutta condition)
must be used.
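As a hedged illustration of that pressure bookkeeping for a single pair of surface speeds (the speeds below are invented for illustration; a real calculation needs the whole velocity distribution):

rho = 1.225       # kg/m^3, air density
v_upper = 80.0    # m/s, assumed speed just above the surface
v_lower = 70.0    # m/s, assumed speed just below the surface

# p + 0.5*rho*v**2 is constant along a streamline, so a speed difference
# implies a pressure difference across the wing:
delta_p = 0.5 * rho * (v_upper**2 - v_lower**2)
print(round(delta_p))   # roughly 919 Pa of upward pressure difference per unit area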
Parts of the Airfoil Misconception
Wings create lift because they are curved on top and flat on the bottom. Incorrect.
Part of the lifting force is due to Bernoulli effect, and part is due
to Newton's 2nd law. Incorrect.
To produce lift, the shape of the wing is critical. Incorrect.
The Bernoulli effect pertains to the shape of the wing, while
Newton's laws pertain to the angle of attack. Incorrect.
Air which is divided by the leading edge must recombine at the
trailing edge. Incorrect.
The upper surface of an airfoil must be longer than the lower surface. Incorrect.
The tilt of the wing produces part of the lift. The shape of the wing
produces the rest. Incorrect.
A wing is really just the lower half of a venturi tube. Incorrect.
The upper surface of a wing will deflect air, but the lower surface is
horizontal, so it has little effect. Incorrect.
Airfoils need not deflect any air; pressure differences
alone can produce lift. Incorrect.
Ship propellors, rudders, rowboat oars, and helicopter blades all deflect water or
air. But airplane wings are entirely different. NOPE.
The "Coanda effect" only applies to thin liquid jets, not to airfoils
and flow attachment. Incorrect.
An airfoil can create lift even at zero attack angle. Misleading.
Cambered airfoils create lift at zero AOA, which proves that the
"Newtonian" theory of lift is wrong. Incorrect.
The "Newtonian" theory of lift is wrong because downwash happens
far behind the wing where it can have no effect. Incorrect.
1. Wings create lift because they are curved on top and flat on the bottom. INCORRECT.
Incorrect because only some wings look like that, while other wings are
symmetrical (they're the same on top and bottom,) while still others
are flat on top ...and curved on the bottom! And don't forget the
hang-gliders and the Wright Brothers' flyer, both of which used thin cambered wings.
2. Part of the lifting force is due to Bernoulli effect, and part is due
to Newton. INCORRECT
Incorrect because ALL wings, regardless of shape or degree of tilt,
must create ALL of their lift because of Newton. To say otherwise
would mean that a wing could violate Newton's Laws! Yet at the same
time, ALL wings create ALL of their lift because of the Bernoulli
Equation. This is true because all of the lifting force comes from
pressure differences on the wings' surfaces.
In fact, one hundred percent of the lifting force can be explained
by "Newton," by ignoring the pressure differences and instead
measuring the deflected air and calculating the change in momentum.
And of course 100% of the lifting force can also be explained by
"Bernoulli", by looking at air speeds and then calculating the air
pressure on every part of the wing surface. See the NASA site.
3. To produce lift, the shape of the wing is critical. INCORRECT.
Incorrect because aerodynamic scientists have found that there are
two critical features of all airfoils: the trailing edge of the wing
must be fairly sharp, and the trailing edge of the wing must be
angled downwards. This is discussed in advanced textbooks in the
chapters on circulatory flow, in the section on "Kutta Condition."
Wings are allowed to have all sorts of crazy airfoil, but if they
don't have a downwards-tilted trailing edge which is sharp, they
won't produce much lift.
Other features of the wing are important but not critical. For
example, in order to prevent "stall" the leading edge of the wing
must be fairly bulbous and the wing's upper surface must lack
sharp curves as well as being fairly smooth (no bumpy screws or
rivets allowed.) If the wing's leading edge is too sharp, or if
its upper surface is made wrong, then the flow of air above the wing
will break loose or "detach," and it will no longer be guided
downwards by the upper surface. This problem is called a "stall,"
and during a stall the amount of lifting force contributed by the
upper wing surface becomes very small.
4. The Bernoulli effect pertains to the shape of the wing, while
Newton's laws pertain to the angle of attack. INCORRECT.
Incorrect because Newton's laws pertain to all features of the wing;
both to wing shape and attack angle. Exactly the same thing is true
of Bernoulli's equation. Wings don't violate Newton's laws, and
wings in conventional flight (slower than the speed of sound) don't
violate Bernoulli's equation. See #2 above.
3. Flat thin wings generate lift entirely because of Newton; because they
are tilted, while thick curved wings generate lift exclusively because
of "Bernoulli Effect?" INCORRECT.
Think for a moment: when a flat thin wing is given a positive angle of attack,
the air above the wing speeds up, and the air below the wing slows
down. 100 percent of the lifting force can be explained using
either the "Bernoulli effect" or the Newton/Coanda principles.
These two simply are a pair of alternate viewpoints on the same
situation, and it's wrong to try to break the lifting force
into a separate percentage of "Bernoulli" force and an "attack angle" force.
- In order to generate lift, the upper surface of an airfoil must be more
strongly curved than the lower surface? INCORRECT
Incorrect, since lift can be generated by symmetrical airfoil such as
those used on acrobatic aircraft. Lift can also be generated by
thin fabric airfoils, by sheets of paper (paper airplanes), by tilted
pieces of flat plywood, or by "supercritical" airfoils which are more
curved on the BOTTOM than the top.
5. Air which is divided by the leading edge must recombine at the
trailing edge. INCORRECT.
Incorrect, since mathematical models and wind tunnel experiments both
show that the upper and lower air flows do not recombine. See these
wind-tunnel photos which illustrate this
lack of recombining.
- Asymmetrical airfoils produce lift because of their special shape, while
symmetrical airfoils produce lift because they are tilted? INCORRECT.
- A symmetrical airfoil cannot create lift? INCORRECT
- Aircraft cannot fly upside down? INCORRECT
- The decreased pressure above an airfoil creates much more lifting force
than the increased pressure below the airfoil. Since the decreased
pressure above is supposedly caused by the Bernoulli effect, while the
increased pressure below is supposedly caused by collision of air with
the tilted wing, the "Bernoulli effect" supplies the lift. Therefore
the "angle of attack" effects are of less importance and can be ignored
in order to simplify the explanation? INCORRECT.
Incorrect, because both the increased pressure below the airfoil and
the decreased pressure above are created entirely by the Bernoulli
effect. ALSO, both are caused by the angle of attack and the forces
resulting from the deflection of massive air. 100% of the lifting
force can be explained by appeals to the Bernoulli effect. But also
100% of the lifting force can be explain by the process of deflection
of air by the wing. However, explaining the difference in air speed
above and below the wing is not straightforward.
- The low pressure above an airfoil produces suction. The lifting force
is an upwards suction force. INCORRECT.
Incorrect. Air molecules produce pressure upon a surface by colliding
with that surface. They do not attract that surface. In other words,
SUCTION DOES NOT EXIST. When you suck air through a straw,
you are lowering the pressure within the straw. There is no suction.
Instead, the outside atmosphere PUSHES the air into the straw.
So, while it is true that the pressure above the wing is low, it is
not true that the lifting force is caused by suction. Instead, the
lifting force is caused by the pressure-difference. If the pressure
above the wing should fall, then the ambient pressure below the wing
will force the airplane to move upwards.
- The air in front of the leading edge of an airfoil and the air behind
the trailing edge are moving at zero degrees deflection? INCORRECT.
Incorrect, since with a real aircraft, the air moves slightly upwards
to meet the leading edge of the wing, but then it is projected greatly
downwards from the trailing edge, creating a "downwash" flow.
Although the "upwash" equals the "downwash" in a 2-dimensional wind
tunnel experiment, this is not true in practice with real airplanes.
(2D wind tunnels depict ground-effect flight, not normal flight.)
With a real airplane flying high above the earth, if the "upwash" and
the "downwash" flows were equal, yet the lifting force was non-
zero, then this would totally violate the law of conservation of
momentum. Unfortunately for the "airfoil-shape" camp, fundamental
physics principles must be satisfied, and Newton's laws are not
selectively violated by airfoils. In order to create an upwards
lifting force, there must be a net downward acceleration of parcels of
air. Planes fly by pushing air downwards, which creates a pressure
difference across a wing. Air-deflection and pressure are linked.
You cannot have one without the other.
- Airplane propellors, rudders, jet turbine blades, and helicopters all
function by deflecting air to create force. They throw the air one way,
and the air pushes them the other way. But airplane wings are
different? Wings operate by a separate kind of physics, and are "sucked
upwards" by the Bernoulli effect? INCORRECT.
Incorrect, because the real world cannot tell the difference between
an airplane wing and a helicopter blade. It does not know that a
ship's rudder and an airplane wing are different. Wings, rudders,
propellors, oars; all these devices work by identical principles:
they throw massive fluid one way, and are thrown the other way by
action/reaction forces. Bernoulli's equation does have bearing, since
the action/reaction forces express themselves as a pressure difference
across the surfaces of the object which deflects the fluid.
- An airfoil can generate lift without deflecting air downward? INCORRECT.
Incorrect. If it did so, it would be staying in the air without
ejecting mass downwards, and this would violate the Conservation
of Momentum law. Yes, balloons remain aloft without ejecting mass,
but balloons function via buoyancy forces, and an airplane wing
obviously does not. Think about it: a helicopter hovers because it
throws air downwards. Yet a 'copter blade is simply a moving wing!
If wings did not fling air downwards, if wings remained aloft only
through pressure differences, then helicopter blades would do the
same, and there would be no downblast below a helicopter.
- An airfoil can generate a lifting force without causing a reaction
force against the air? INCORRECT.
Incorrect. If it did so, it would violate Newton's Third Law of
Motion, the law of equal action and reaction forces.
- The majority of textbooks use the popular 'path length' or 'airfoil
shape' explanation of lift, and it is inconceivable that this many books
could be wrong. Therefore, the "path length" explanation is the
correct one? INCORRECT.
Incorrect, this argument from authority is simply wrong. It is also
dangerous, since it convinces us to never question authority and to
close our eyes to authors' errors. If we trust the consensus
agreements of others, then we become sheep which follow a leaderless
herd. Beware of this habit! As the NASA space shuttle managers who
closed their eyes to the Challenger booster seal problem found out,
the real world is all too real. Nature ignores politics, and
scientific facts are determined by evidence, not by majority votes.
- The upper surface of a wing will deflect air, but the lower surface is
horizontal, so it has little effect. INCORRECT.
Incorrect, but for an interesting reason. If a thin flat wing
deflects air downwards, diagrams show that the air above the wing and
the air below the wing are equally deflected. If we then make this
wing thicker and streamlined, the total amount of deflected air and
the lifting force remain the same... but the air below the wing
appears less deflected, and the air above the wing appears more
deflected. This happens because a thick wing must push air out
of its way, and as the flowing air curves away from the oncoming
wing, it takes a straighter path in the region below the wing.
This has no effect on the lifting force, since the air above the
wing takes a more curved path, so the pressure difference remains
the same. The thick wing SEEMS to get more lift from the curved
streamlines above than from the straight streamlines below, but
this is an illusion. The lift comes from the DIFFERENCE between
the two flows, and changing the thickness of the wing will alter
the appearance of the air flows without changing the difference or
changing the lifting force.
- The 'Coanda effect' only involves narrow jets of air, and has little to
do with airfoil operation, so its exclusion from explanations of lift is
understandable and justified? INCORRECT.
Incorrect, the Coanda effect involves the adhesion of a flow to a
surface. It applies to ANY flowing fluid, not just to narrow jets.
If the airflow across a wing did not adhere to the wing, the wing
would be permanently in the 'stall' regime of operation. During
"stall", it would not deflect air across its upper surface, and it
would produce a greatly diminished lifting force.
- There are two explanations of airfoil lifting force: angle of attack, and
pressure differential. The 'pressure differential' explanation is correct,
and the 'angle of attack' is misleading and can be ignored? INCORRECT.
Incorrect. Both explanations are useful once the incorrect parts of
the "path length" explanation have been removed. They are two
different "mental models," they are two different ways of looking at
one complicated situation. Paraphrasing the physicist R. Feynman:
"Unless you have several different ways of looking at something, you
don't really understand it." A complete understanding requires that
we easily shift between alternate viewpoints. Wings really do
produce lift when velocity differences create a vertically-
directed pressure differential across their surface area. But also,
they really do produce lift by reacting against air and driving it
downwards. Unfortunately the airfoil-shape-based explanation has
become connected with several incorrect add-on explanations; the
"path-length" fallacy for example.
- An airfoil can generate lift at zero angle of attack? MISLEADING
Not entirely wrong: depending on how we define 'angle of
attack', a wing may be at zero angle of attack even though it
obviously *acts* tilted and deflects the oncoming air downwards.
This is a fight between semantics and reality. If the rear portion of
a wing is tilted downwards and deflects the air downwards, shouldn't
it by definition have a positive angle of attack?
No, not if 'angle of attack' is measured by drawing a line between the
tips of the leading and trailing edges of the wing crossection. If
the leading edge is bulbous, then small details on the leading edge
can radically change the location of the drawn line without radically
changing the interaction of the wing with the air. If such a wing is
then rotated to force it to take a "zero" angle, that rotation in
reality tilts the wing to a positive attack angle and generates lift.
- Cambered airfoils produce lift at zero AOA, which proves that the
"Newton" explanation is wrong? INCORRECT
Incorrect. Air has mass, and this means that it has inertia. Because
of inertia, an exhaust port can produce a narrow jet of air, yet an
intake port cannot pull a narrow jet inwards from a distance. This
concept applies to wings. When a cambered airfoil moves forwards at
zero AOA (Angle of Attack,) air moves up towards the leading edge, and
air also flows downwards off of the trailing edge. The air which
flows downwards behind the wing keeps moving downwards, and so the
rear half of the wing controls the angle of the downwash, while the
leading edge has little effect. (In aerodynamics, this is called the
"Kutta Condition.") In a cambered wing at zero AOA, the rear half of
the wing behaves as an airfoil with positive AOA. On the whole, the
cambered airfoil BEHAVES as if it has a positive AOA, even though the
geometrical angle of attack is zero.
- A properly shaped airfoil gives increased lift because the air on the
upper surface moves faster than the air on the lower? MISLEADING
Not entirely wrong. This is only half the story. A properly
shaped airfoil gives increased lift because the airflow does not
easily "detach" from the upper surface, so the upper airflow can
generate lift even at large angles of attack and at low aircraft
speeds. A sheet of plywood makes a poor wing because the airflow will
"detach" from the upper surface of the wood when the sheet is tilted
more than a tiny bit. This is called "stall", and it causes the upper
surface of the wing to stop contributing a lifting force. A properly
designed wing must spread the net deflection of air widely across its
upper leading surface rather than concentrating all the deflection at
its leading edge. Hence, the upper surfaces of most wings are
designed with the curvature which avoids immediate flow-detachment and
stall. The shape of wings does not create lift; instead it only prevents flow detachment (stall).
- The "Newton" explanation is wrong because downwash occurs BEHIND the
wing, where it can have no effect? Downwash can't generate a lifting force? INCORRECT.
Wrong, and silly as well! The above statement caught fire on the
sci.physics newsgroup. Think for a moment: the exhaust from a rocket
or a jet engine occurs BEHIND the engine. Does this mean that
action/reaction does not apply to jets and rockets? Of course not.
It's true that the exhaust stream doesn't directly push on the inner
surface of a rocket engine. The lifting force in rockets is caused
by acceleration of mass, and within the exhaust plume the mass
is no longer accelerating. In rocket engines, the lifting force
appears in the same place that the exhaust is given high velocity:
where gases interact inside the engine.
And with aircraft, the lifting force appears in the same place that
the exhaust (the downwash) is given high downwards velocity. If a
wing encounters some unmoving air, and the wing then throws the air
downwards, the velocity of the air has been changed, and the wing will
experience an upwards reaction force. At the same time, a downwash-
flow is created. To calculate the lifting force of a rocket engine,
we can look exclusively at the exhaust velocity and mass, but this
doesn't mean that the rocket exhaust creates lift. It just means that
the rocket exhaust is directly proportional to lift (since the exhaust
velocity and the lifting force have a common origin.) The same is
true with airplane wings and downwash. To have lift at high
altitudes, we MUST have downwash, and if we double the downwash, we
double the lifting force. But downwash doesn't cause lift, instead
the wing's interaction with the air both creates a lifting force and
gives the air a downwards velocity (by F=MA, don't you know!) | http://amasci.com/wing/airfoil.html.save | 13 |
52 | Chem1 General Chemistry Virtual Textbook
The Basic Gas Laws
Properties of gases
The "pneumatic" era of chemistry began with the discovery of the vacuum around 1650 which clearly established that gases are a form of matter. The ease with which gases could be studied soon led to the discovery of numerous empirical (experimentally-discovered) laws that proved fundamental to the later development of chemistry and led indirectly to the atomic view of matter. These laws are so fundamental to all of natural science and engineering that everyone learning these subjects needs to be familiar with them.
Robert Boyle (1627-91) showed that the volume of air trapped by a liquid in the closed short limb of a J-shaped tube decreased in exact proportion to the pressure produced by the liquid in the long part of the tube. The trapped air acted much like a spring, exerting a force opposing its compression. Boyle called this effect “the spring of the air", and published his results in a pamphlet of that title.
[Table: some of Boyle's data, showing volume, pressure, and the product P × V]
The difference between the heights of the two mercury columns gives the pressure (76 cm = 1 atm), and the volume of the air is calculated from the length of the air column and the tubing diameter.
Some of Boyle's actual data are shown in the table.
Boyle's law can be expressed as
PV = constant
P1V1 = P2V2
These relations hold true only if the number of molecules n and the temperature are constant. This is a relation of inverse proportionality; any change in the pressure is exactly compensated by an opposing change in the volume. As the pressure decreases toward zero, the volume will increase without limit. Conversely, as the pressure is increased, the volume decreases, but can never reach zero. There will be a separate P-V plot for each temperature; a single P-V plot is therefore called an isotherm.
Shown here are some isotherms for one mole of an ideal gas at several different temperatures. Each plot has the shape of a hyperbola— the locus of all points having the property x y = a, where a is a constant. You will see later how the value of this constant (PV=25 for the 300K isotherm shown here) is determined.
It is very important that you understand this kind of plot which governs any relationship of inverse proportionality. You should be able to sketch out such a plot when given the value of any one (x,y) pair.
A related type of plot with which you should be familiar shows the product PV as a function of the pressure. You should understand why this yields a straight line, and how this set of plots relates to the one immediately above.
In an industrial process, a gas confined to a volume of 1 L at a pressure of 20 atm is allowed to flow into a 12-L container by opening the valve that connects the two containers. What will be the final pressure of the gas?
Solution: The final volume of the gas is (1 + 12) L = 13 L. The pressure falls in inverse proportion to the volume:
P2 = (20 atm) × (1 L ÷ 13 L) = 1.5 atm
Note that there is no need to make explicit use of any "formula" in problems of this kind!
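Here is the same arithmetic as a short sketch (my own check, not part of the original text):

P1, V1 = 20.0, 1.0      # atm, L
V2 = 1.0 + 12.0         # L, total volume once the valve is open
P2 = P1 * V1 / V2       # Boyle's law: P1*V1 = P2*V2
print(round(P2, 2))     # 1.54 atm, consistent with the ~1.5 atm worked out above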
2 How the temperature affects the volume:
All matter expands when heated, but gases are special in that their degree of expansion is independent of their composition. The French scientists Jacques Charles (1746-1823) and Joseph Gay-Lussac (1778-1850) independently found that if the pressure is held constant, the volume of any gas changes by the same fractional amount (1/273 of its value) for each C° change in temperature.
A graphical expression of the law of Charles and Gay-Lussac can be seen in these plots of the volume of one mole of an ideal gas as a function of its temperature at various constant pressures.
The air pressure in a car tire is 30 psi (pounds per square inch) at 10°C. What will the pressure be after driving has raised its temperature to 45°C? (Assume that the volume remains unchanged.)
Solution: The volume is fixed, so the pressure rises in direct proportion to the ratio of the absolute temperatures:
P2 = (30 psi) × (318K ÷ 283K) = 33.7 psi
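And the same sort of check for the tire example (again just a sketch of the arithmetic):

P1 = 30.0            # psi
T1 = 10 + 273.0      # K
T2 = 45 + 273.0      # K
P2 = P1 * T2 / T1    # constant volume: P is proportional to absolute T
print(round(P2, 1))  # 33.7 psi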
The relation between the temperature of a gas and its volume has long been known. In 1702, Guillaume Amontons (1663-1705), who is better known for his early studies of friction, devised a thermometer that related the temperature to the volume of a gas. Robert Boyle had observed this inverse relationship in 1662, but the lack of any uniform temperature scale at the time prevented them from establishing the relationship as we presently understand it.
Jacques Charles discovered the law that is named for him in the 1780s, but did not publish his work. John Dalton published a form of the law in 1801, but the first thorough published presentation was made by Gay-Lussac in 1802, who acknowledged Charles' earlier studies.
The buoyancy that lifts a hot-air balloon into the sky depends on the difference between the density (mass ÷ volume) of the air entrapped within the balloon's envelope, compared to that of the air surrounding it. When a balloon on the ground is being prepared for flight, it is first partially inflated by an external fan, and possesses no buoyancy at all. Once the propane burners are started, this air begins to expand according to Charles' law. After the warmed air has completely inflated the balloon, further expansion simply forces excess air out of the balloon, leaving the weight of the diminished mass of air inside the envelope smaller than that of the greater mass of cooler air that the balloon displaces.
Jacques Charles collaborated with the Montgolfier Brothers whose hot-air balloon made the world's first manned balloon flight in June, 1783 (left). Ten days later, Charles himself co-piloted the first hydrogen-filled balloon. Gay-Lussac (right), who had a special interest in the composition of the atmosphere, also saw the potential of the hot-air balloon, and in 1804 he ascended to a then-record height of 6.4 km. [images: Wikimedia]
Jacques Charles' understanding of buoyancy led to his early interest in balloons and to the use of hydrogen as an inflating gas. His first flight was witnessed by a crowd of 40,000 which included Benjamin Franklin, at that time ambassador to France. On its landing in the countryside, the balloon was reportedly attacked with axes and pitchforks by terrified peasants who believed it to be a monster from the skies. Charles' work was mostly in mathematics, but he managed to invent a number of scientific instruments and confirmed experiments in electricity that had been performed earlier by Benjamin Franklin and others.[*]
Gay-Lussac's balloon flights enabled him to sample the composition of the atmosphere at different altitudes (he found no difference). His Law of Combining Volumes (described below) constituted one of the foundations of modern chemistry. Gay-Lussac's contributions to chemistry are numerous; his work in electrochemistry enabled him to produce large quantities of sodium and potassium; the availability of these highly active metals led to his co-discovery of the element boron. He was also the first to recognize iodine as an element. In an entirely different area, he developed a practical method of measuring the alcohol content of beverages, and coined the names "pipette" and "burette" that are known to all chemistry students. A good biography can be viewed at Google Books.
For a good tutorial overview of the Law of Combining Volumes, see this ChemPaths page.
In the same 1808 article in which Gay-Lussac published his observations on the thermal expansion of gases, he pointed out that when two gases react, they do so in volume ratios that can always be expressed as small whole numbers. This came to be known as the Law of combining volumes.
These "small whole numbers" are of course the same ones that describe the "combining weights" of elements to form simple compounds, as described in the lesson dealing with simplest formulas from experimental data.
The Italian scientist Amedeo Avogadro (1776-1856) drew the crucial conclusion: these volume ratios must be related to the relative numbers of molecules that react, and thus the famous "E.V.E.N principle":
Equal volumes of gases, measured at the same temperature and pressure, contain equal numbers of molecules
Avogadro's law thus predicts a directly proportional relation between the number of moles of a gas and its volume.
This relationship, originally known as Avogadro's Hypothesis, was crucial in establishing the formulas of simple molecules at a time (around 1811) when the distinction between atoms and molecules was not clearly understood. In particular, the existence of diatomic molecules of elements such as H2, O2, and Cl2 was not recognized until the results of combining-volume experiments such as those depicted below could be interpreted in terms of the E.V.E.N. principle.
Early chemists made the mistake of assuming that the formula of water is HO. This led them to miscalculate the molecular weight of oxygen as 8 (instead of 16). If this were true, the reaction H + O → HO would correspond to the following combining volumes results according to the E.V.E.N principle:
But a similar experiment on the formation of hydrogen chloride from hydrogen and chlorine yielded twice the volume of HCl that was predicted by the the assumed reaction H + Cl → HCl. This could be explained only if hydrogen and chlorine were diatomic molecules:
This made it necessary to re-visit the question of the formula of water. The experiment immediately confirmed that the correct formula of water is H2O:
This conclusion was also seen to be consistent with the observation, made a few years earlier by the English chemists Nicholson and Carlisle that the reverse of the above reaction, brought about by the electrolytic decomposition of water, yields hydrogen and oxygen in a 2:1 volume ratio.
A nice overview of these developments can be seen at David Dice's Chemistry page, from which this illustration is taken.
If the variables P, V, T and n (the number of moles) have known values, then a gas is said to be in a definite state, meaning that all other physical properties of the gas are also defined. The relation between these state variables is known as an equation of state. By combining the expressions of Boyle's, Charles', and Avogadro's laws (you should be able to do this!) we can write the very important ideal gas equation of state

PV = nRT
where the proportionality constant R is known as the gas constant. This is one of the few equations you must commit to memory in this course; you should also know the common value and units of R.
Take note of the word "hypothetical" here. No real gas (whose molecules occupy space and interact with each other) can behave in a truly ideal manner. But we will see in the last lesson of this series that all gases behave more and more like an ideal gas as the pressure approaches zero. A pressure of only 1 atm is sufficiently close to zero to make this relation useful for most gases at this pressure.
Many textbooks show formulas, such as P1V1 = P2V2 for Boyle's law. Don't bother memorizing them; if you really understand the meanings of these laws as stated above, you can easily derive them on the rare occasions when they are needed. The ideal gas equation is the only one you need to know.
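As a minimal sketch of using it (my own example, assuming the usual value of R in L·atm units): the volume of one mole of an ideal gas at 0°C and 1 atm.

R = 0.08206               # L·atm per mol per K
n, P, T = 1.0, 1.0, 273.15
V = n * R * T / P         # PV = nRT rearranged for V
print(round(V, 1))        # 22.4 L, the familiar molar volume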
In order to depict the relations between the three variables P, V and T we need a three-dimensional graph.
Each point on the curved surface represents a possible combination of (P,V,T) for an arbitrary quantity of an ideal gas. The three sets of lines inscribed on the surface correspond to states in which one of these three variables is held constant.
The red curved lines, being lines of constant temperature, or isotherms, are plots of Boyle's law. These isotherms are also seen projected onto the P-V plane at the top right.
The yellow lines are isochors and represent changes of the pressure with temperature at constant volume.
The green lines, known as isobars, and projected onto the V-T plane at the bottom, show how the volumes contract to zero as the absolute temperature approaches zero, in accordance with the law of Charles and Gay-Lussac.
A biscuit made with baking powder has a volume of 20 mL, of which one-fourth consists of empty space created by gas bubbles produced when the baking powder decomposed to CO2. What weight of NaHCO3 was present in the baking powder in the biscuit? Assume that the gas reached its final volume during the baking process when the temperature was 400°C.
(Baking powder consists of sodium bicarbonate mixed with some other solid that produces an acidic solution on addition of water, initiating the reaction
NaHCO3(s) + H+ → Na+ + H2O + CO2 )
Solution: Use the ideal gas equation to find the number of moles of CO2 gas; this will be the same as the number of moles of NaHCO3 (84 g mol–1) consumed:
9.1E–5 mol × 84 g mol–1 = 0.0076 g
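A sketch of the same calculation (assuming, as the worked example implicitly does, a pressure of 1 atm):

R = 0.08206                 # L·atm per mol per K
P = 1.0                     # atm (assumed)
V = (20.0 / 4) / 1000.0     # one-fourth of 20 mL, in litres
T = 400 + 273.0             # K
n = P * V / (R * T)         # moles of CO2, equal to moles of NaHCO3
print(n)                    # about 9.1e-05 mol (to two significant figures)
print(round(n * 84, 4))     # about 0.0076 g of NaHCO3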
Make sure you thoroughly understand the following essential ideas which have been presented above, and be able to state them in your own words. It is especially important that you know the precise meanings of all the green-highlighted terms in the context of this topic. | http://www.chem1.com/acad/webtext/gas/gas_2.html | 13 |
We’ve seen one Python function, abs(), that is also a standard mathematical function. The usual mathematical notation is |x|. Some mathematical functions are difficult to represent with simple lines of text, so the folks who invented Python elected to use “prefix” notation, putting the name of the function first.
This function syntax is pervasive in Python, and we’ll see many operations that are packaged in the form of functions. We’ll look at many additional function definitions throughout this book. In this chapter, we’ll focus on built-in functions.
We’ll look at a few basic functions in Say It With Functions; we’ll show how formal definitions look in pow() and round() Definitions. We’ll show how you can evaluate complex expressions in Multiple Steps. We’ll touch in the accuracy issue in Accuracy?. We’ll look at how Python gives you flexibility through optional features in Another Round, Please.
There are a number of conversion or factory functions that we’ll describe in Functions are Factories (really!). In Going the Other Way we’ll see how we can use conversion functions to make strings from numbers. Finally, in Most and Least, we’ll look at functions to find the maximum or minimum of a number of values.
Many of the Python processing operations that we might need are provided in the form of functions. Functions are one of the ways that Python lets us specify how to process some data. A function, in a mathematical sense, is a transformation from some input to an output. The mathematicians sometimes call this a mapping, because the function is a kind of map from the input value to the output value.
We looked at the abs() function in the previous section. It maps negative and positive numbers to their absolute magnitude, measured as a positive number. The abs() function maps -4 to 4, and 3.25 to 3.25.
>>> abs(-18)
18
>>> pow(16, 3)
4096
>>> round(9.424)
9.0
>>> round(12.57)
13.0
A function is an expression, with the same syntactic role as any other expression, for example 2+3. You can freely combine functions with other expressions to make more complex expressions. Additionally, the arguments to a function can also be expressions. Therefore, we can combine functions into more complex expressions pretty freely. This takes some getting used to, so we’ll look at some examples.
>>> 3*abs(-18)
54
>>> pow(8*2, 3)*1.5
6144.0
>>> round(66.2/7)
9.0
>>> 8*round(abs(50.25)/4.0, 2)
100.48
The function names provide a hint as to what they do. Here are the formal definitions, the kind of thing you’ll see in the Python reference manuals.
pow(x, y [, z])
Raises x to the y power.
If z is present, this is done modulo z: pow(x, y) % z.
round(number [, ndigits])
Rounds number to ndigits to the right of the decimal point.
The [ and ]‘s are how we show that some parts of the syntax are optional. We’ll summarize this in Function Syntax Rules.
Function Syntax Rules
We’ll show optional parameters to functions by surrounding them with [ and ]. We don’t actually enter the [ and ]‘s; they’re just hints as to what the alternative forms of the function are.
round(number [, ndigits])
Rounds number to ndigits to the right of the decimal point.
In the case of the round() function, the syntax summary shows us there are two different ways to use this function:
>>> round(2.459, 2)
2.46
>>> round(2.459)
2.0
Note that there is some visual ambiguity between using [ and ] in our Python programming and using [ and ] as markup for the grammar rules. Usually the context makes it clear.
>>> 2L**32
4294967296L
>>> pow(2L, 32)
4294967296L
Note that pow(x,0.5) is the square root of x. Also, the function math.sqrt() is the square root of x. The pow() function is one of the built-in functions, while the square root function is only available in the math library. We’ll look at the math library in The math Module – Trig and Logs.
In the next example we’ll get the square root of a number, and then square that value. It’ll be a two-step calculation, so we can see each intermediate step.
>>> pow(2, 0.5)
1.4142135623730951
>>> _ ** 2
2.0000000000000004
The first question you should have is “what does that _ mean?”
The _ is a Python short-cut. During interactive use, Python uses the name _ to mean the result it just printed. This saves us retyping things over and over. In the case above, the “previous result” was the value of pow( 2, 0.5 ). By definition, we can replace a _ with the entire previous expression to see what is really happening.
>>> pow(2, 0.5) ** 2
2.0000000000000004
Until we start writing scripts, this is a handy thing. When we start writing scripts, we won’t be able to use the _, instead we’ll use something that’s a much more clear and precise.
Let’s go back to the previous example: we’ll get the square root of a number, and then square that value.
>>> pow( 3, 0.5 )
1.7320508075688772
>>> _ ** 2
2.9999999999999996
>>> pow( 3, 0.5 ) ** 2
2.9999999999999996
Here’s a big question: what is that ”.9999999999999996” foolishness?
That’s the left-overs from the conversion from our decimal notation to the computer’s internal binary and back to human-friendly decimal notation. We talked about it briefly in Floating-Point Numbers, Also Known As Scientific Notation. If we know the order of magnitude of the result, we can use the round function to clean up this kind of small error. In this case, we know the answer is supposed to be a whole number, so we can round it off.
>>> pow( 3, 0.5 ) ** 2
2.9999999999999996
>>> round(_)
3.0
Debugging Function Expressions
If you look back at Syntax Rule 6, you’ll note that the ()s need to be complete. If you accidentally type something like round(2.45 with no closing ), you’ll see the following kind of exchange.
>>> round(2.45
...
...
... )
2.0
The ... is Python’s hint that the statement is incomplete. You’ll need to finish the ()s so that the statement is complete.
Above, we noted that the round() function had an optional argument. When something’s optional, we can look at it as if there are two forms of the round() function: a one-argument version and a two-argument version.
>>> round(678.456)
678.0
>>> round(678.456, 2)
678.46000000000004
>>> round(678.456, -1)
680.0
So, rounding off to -1 decimal places means the nearest 10. Rounding off to -2 decimal places is the nearest 100. Pretty handy for doing business reports where we have to round off to the nearest million.
How do we get Python to do specific conversions among our various numeric data types? When we mix whole numbers and floating-point scientific notation, Python automatically converts everything to floating-point. What if we want the floating-point number truncated down to a whole number instead?
Here’s another example: what if we want the floating-point number transformed into a long integer instead of the built-in assumption that we want long integers turned into floating-point numbers? How do we control this coercion among numbers?
We’ll look at a number of factory functions that do number conversion. Each function is a factory that creates a new number from an existing number. Eventually, we’ll identify numerous varieties of factory functions.
These factory functions will also create numbers from string values. When we write programs that read their input from files, we’ll see that files mostly have strings. Factory functions will be an important part of reading strings from files and creating numbers from those strings so that we can process them.
float(x)
Creates a floating-point number equal to the string or number x. If a string is given, it must be a valid floating-point number: digits, decimal point, and an exponent expression. You can use this function when doing division to prevent getting the simple integer quotient.
>>> float(22)/7
3.1428571428571428
>>> 22/7
3
>>> float("6.02E24")
6.0200000000000004e+24
int(x)
Creates an integer equal to the string or number x. This will chop off all of the digits to the right of the decimal point in a floating-point number. If a string is given, it must be a valid decimal integer string.
>>> int('1234')
1234
>>> int(3.14159)
3
long(x)
Creates a long integer equal to the string or number x. If a string is given, it must be a valid decimal integer. The expression long(2) has the same value as the literal 2L. Examples: long(6.02E23), long(2).
>>> long(2)**64
18446744073709551616L
>>> long(22.0/7.0)
3L
The first example shows the range of values possible with 64-bit integers, available on larger computers. This is a lot more than the paltry two billion available on a 32-bit computer.
Complex Numbers - Math wizards only. Complex is not as simple as the others. A complex number has two parts, real and imaginary. Conversion to complex typically involves two parameters.
complex(real [, imag])
Creates a complex number with the real part of real; if the second parameter, imag, is given, this is the imaginary part of the complex number, otherwise the imaginary part is zero.
If this syntax synopsis with the [ and ] is confusing, you’ll need to see Function Syntax Rules.
>>> complex(3,2)
(3+2j)
>>> complex(4)
(4+0j)
Note that the second parameter, with the imaginary part of the number, is optional. This leads to two different ways to evaluate this function. In the example above, we used both variations.
Conversion from a complex number (effectively two-dimensional) to a one-dimensional integer or float is not directly possible. Typically, you’ll use abs() to get the absolute value of the complex number. This is the geometric distance from the origin to the point in the complex number plane. The math is straight-forward, but beyond the scope of this introduction to Python.
>>> abs(3+4j)
5.0
If the int() function turns a string of digits into a proper number, can we do the opposite thing and turn an ordinary number into a string of digits?
The str() and repr() functions convert any Python object to a string. The str() string is typically more readable, where the repr() result can help us see what Python is doing under the hood. For most garden-variety numeric values, there is no difference. For the more complex data types, however, the results of repr() and str() can be different.
Here are some examples of converting floating-point expressions into strings of digits.
>>> str( 22/7.0 )
'3.14285714286'
>>> repr( 355/113. )
'3.1415929203539825'
Note that the results are surrounded by ' marks. These apostrophes tell us that these aren’t actually numbers; they’re strings of digits.
What’s the difference? Try this and see.
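The book’s original example isn’t reproduced here, so here is one possible experiment of my own that makes the point: Python treats a string of digits as text, not as a number.

>>> 314 * 2
628
>>> "314" * 2
'314314'
>>> "314" + 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot concatenate 'str' and 'int' objects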
A string of digits may look numeric to you, but Python won’t look inside a string to see if it “looks” like a number. If it is a string (with " or '), it is not a number, and Python won’t attempt to do any math.
Here are the formal definitions of these two functions. These aren’t very useful now, but we’ll return to them time and again as we learn more about how Python works.
str(object): Creates a string representation of object.
repr(object): Creates a string representation of object, usually in Python syntax.
The max() and min() functions accept any number of values and return the largest or smallest of the values. These functions work with any type of data. Be careful when working with strings, because these functions use alphabetical order, which has some surprising consequences.
>>> max( 10, 11, 2 )
11
>>> min( 'asp', 'adder', 'python' )
'adder'
>>> max( '10', '11', '2' )
'2'
The last example (max( '10', '11', '2' )) shows the “alphabetical order of digits” problem. Superficially, this looks like three numbers (10, 11 and 2). But, they are quoted strings, and might as well be words. What would the result of max( 'ba', 'bb', 'c' ) be? Anything surprising about that? The alphabetic order rules apply when we compare string values. If we want the numeric order rules, we have to supply numbers instead of strings.
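If we do want numeric order, one approach (a sketch of mine, not from the book) is to convert each string with int() before comparing:

>>> max( int('10'), int('11'), int('2') )
11
>>> min( int('10'), int('11'), int('2') )
2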
Here are the formal definitions for these functions.
max(sequence): Returns the object with the largest value in sequence.
min(sequence): Returns the object with the smallest value in sequence.
Write an expression to convert the mixed fraction 3 5/8 to a floating-point number.
Evaluate (22.0/7.0)-int(22.0/7.0). What is this value? Compare it with 22.0/7.0. What general principle does this illustrate?
Try illegal conversions like int('A'), int(3+4j ), int( 2L**64 ). Why are exceptions raised? Why can’t a simple default value like zero be used instead?
Imaginary numbers have an intuitive explanation: they “rotate” numbers, just like negatives make a “mirror image” of a number. This insight makes arithmetic with complex numbers easier to understand, and is a great way to double-check your results. Here’s our cheatsheet:
This post will walk through the intuitive meanings.
In regular algebra, we often say “x = 3” and all is dandy — there’s some number “x”, whose value is 3. With complex numbers, there’s a gotcha: there’s two dimensions to talk about. When writing
z = 3 + 4i
we’re saying there’s a number “z” with two parts: 3 (the real part) and 4i (imaginary part). It is a bit strange how “one” number can have two parts, but we’ve been doing this for a while. We often write:
y = 3.4
and it doesn’t bother us that a single number “y” has both an integer part (3) and a fractional part (.4 or 4/10). Y is a combination of the two. Complex numbers are similar: they have their real and imaginary parts “contained” in a single variable (shorthand is often Re and Im).
Unfortunately, we don’t have nice notation like (3.4) to “merge” the parts into a single number. I had an idea to write the imaginary part vertically, in fading ink, but it wasn’t very popular. So we’ll stick to the “a + bi” format.
Because complex numbers use two independent axes, we find size (magnitude) using the Pythagorean Theorem:
|a + bi| = √(a² + b²)
So, a number z = 3 + 4i would have a magnitude of 5. The shorthand for “magnitude of z” is this: |z|
See how it looks like the absolute value sign? Well, in a way, it is. Magnitude measures a complex number’s “distance from zero”, just like absolute value measures a negative number’s “distance from zero”.
Complex Addition and Subtraction
We’ve seen that regular addition can be thought of as “sliding” by a number. Addition with complex numbers is similar, but we can slide in two dimensions (real or imaginary). For example:
Adding (3 + 4i) to (-1 + i) gives 2 + 5i.
Again, this is a visual interpretation of how “independent components” are combined: we track the real and imaginary parts separately.
Subtraction is the reverse of addition — it’s sliding in the opposite direction. Subtracting (1 + i) is the same as adding -1 * (1 + i), or adding (-1 – i).
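Python’s built-in complex type (it writes i as j) makes these slides easy to check; this is a quick sketch of my own, not part of the original post.

x = 3 + 4j
y = -1 + 1j
total = x + y              # (2+5j): real parts add, imaginary parts add
difference = x - (1 + 1j)  # (2+3j): sliding in the opposite direction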
Here’s where the math gets interesting. When we multiply two complex numbers (x and y) to get z:
- Add the angles: angle(z) = angle(x) + angle(y)
- Multiply the magnitudes: |z| = |x| * |y|
That is, the angle of z is the sum of the angles of x and y, and the magnitude of z is the product of the magnitudes. Believe it or not, the magic of complex numbers makes the math work out!
Multiplying by the magnitude (size) makes sense — we’re used to that happening in regular multiplication (3 × 4 means you multiply 3 by 4’s size). The reason the angle addition works is more detailed, and we’ll save it for another time. (Curious? Find the sine and cosine addition formulas and compare them to how (a + bi) * (c + di) get multiplied out).
Time for an example: let’s multiply z = 3 + 4i by itself. Before doing all the math, we know a few things:
- The resulting magnitude will be 25. z has a magnitude of 5, so |z| * |z| = 25.
- The resulting angle will be above 90. 3 + 4i is above 45 degrees (since 3 + 3i would be 45 degrees), so twice that angle will be more than 90.
With our predictions on paper, we can do the math:
(3 + 4i)(3 + 4i) = 9 + 12i + 12i + 16i² = -7 + 24i
Time to check our results:
- Magnitude: sqrt((-7 * -7) + (24 * 24)) = sqrt(625) = 25, which matches our guess.
- Angle: Since -7 is negative and 24i is positive, we know we are going “backwards and up”, which means we’ve crossed 90 degrees (“straight up”). Getting geeky, we compute atan(24/-7) = 106.2 degrees (keeping in mind we’re in quadrant 2). This guess checks out too.
Nice. While we can always do the math out, the intuition about rotations and scaling helps us check the result. If the resulting angle was less than 90 (“forward and up”, for example), or the resulting magnitude not 25, we’d know there was a mistake in our math.
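If you want the computer to do the double-checking, here is a small Python sketch of my own using the standard cmath module, with the same 3 + 4i example:

import cmath, math

z = 3 + 4j
w = z * z                              # (-7+24j)
magnitude = abs(w)                     # 25.0, i.e. |z| * |z|
angle = math.degrees(cmath.phase(w))   # about 106.26 degrees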
Division is the opposite of multiplication, just like subtraction is the opposite of addition. When dividing complex numbers (x divided by y), we:
- Subtract angles angle(z) = angle(x) – angle(y)
- Divide by magnitude |z| = |x| / |y|
Sounds good. Now let’s try to do it:
Hrm. Where to start? How do we actually do the division? Dividing regular algebraic numbers gives me the creeps, let alone the weirdness of i (Mister mister! Didya know that 1/i = -i? Just multiply both sides by i and see for yourself! Eek.). Luckily there’s a shortcut.
Introducing Complex Conjugates
Our first goal of division is to subtract angles. How do we do this? Multiply by the opposite angle! This will “add” a negative angle, doing an angle subtraction.
Instead of z = a + bi, think about a number z* = a – bi, called the “complex conjugate”. It has the same real part, but is the “mirror image” in the imaginary dimension. The conjugate or “imaginary reflection” has the same magnitude, but the opposite angle!
So, multiplying by a – bi is the same as subtracting an angle. Neato.
Complex conjugates are indicated by a star (z*) or bar above the number — mathematicians love to argue about these notational conventions. Either way, the conjugate is the complex number with the imaginary part flipped:
z = a + bi
has complex conjugate
z* = a - bi
Note that b doesn’t have to be “negative”. If z = 3 – 4i, then z* = 3 + 4i.
Multiplying By the Conjugate
What happens if you multiply by the conjugate? What is z times z*? Without thinking, think about this:
So we take 1 (a real number), add angle(z), and add angle (z*). But this last angle is negative — it’s a subtraction! So our final result should be a real number, since we’ve canceled the angles. The number should be |z|^2 since we scaled by the size twice.
Now let’s do an example:
(3 + 4i)(3 - 4i) = 9 - 12i + 12i - 16i² = 9 + 16 = 25
We got a real number, like we expected! The math fans can try the algebra also:
(a + bi)(a - bi) = a² - (bi)² = a² + b²
Tada! The result has no imaginary parts, and is the magnitude squared. Understanding complex conjugates as a “negative rotation” lets us predict these results in a different way.
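A quick numeric check of my own, using Python’s complex type and its .conjugate() method:

z = 3 + 4j
product = z * z.conjugate()       # (25+0j): no imaginary part left
magnitude_squared = abs(z) ** 2   # 25.0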
Scaling Your Numbers
When multiplying by a conjugate z*, we scale by the magnitude |z*|. To reverse this effect we can divide by |z|, and to actually shrink by |z| we have to divide again. All in all, after multiplying by the conjugate we have to divide the result by |z| * |z|.
Show Me The Division!
I’ve been sidestepping the division, and here’s the magic. If we want to do
We can approach it intuitively:
- Rotate by opposite angle: multiply by (1 – i) instead of (1 + i)
- Divide by magnitude squared: divide by |sqrt(2)|^2 = 2
The answer, using this approach, is:
The more traditional “plug and chug” method is to multiply top and bottom by the complex conjugate:
We’re traditionally taught to “just multiply both sides by the complex conjugate” without questioning what complex division really means. But not today.
We know what’s happening: division is subtracting an angle and shrinking the magnitude. By multiplying top and bottom by the conjugate, we subtract by the angle of (1-i), which happens to make the denominator a real number (it’s no coincidence, since it’s the exact opposite angle). We scaled both the top and bottom by the same amount, so the effects cancel. The result is to turn division into a multiplication in the numerator.
Both approaches work (you’re usually taught the second), but it’s nice to have one to double-check the other.
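Here is a small sketch of mine comparing the two routes in Python. The exact numbers of the worked example above aren’t shown in this text, so I am assuming a numerator of 3 + 4i with the 1 + i denominator from the steps; swap in whatever values you like.

x = 3 + 4j                            # assumed numerator (not from the original example)
y = 1 + 1j                            # the denominator used in the steps above

mag2 = (y * y.conjugate()).real       # |y|^2, exactly 2.0 here
intuitive = x * y.conjugate() / mag2  # (3.5+0.5j): rotate by the conjugate, shrink by |y|^2
direct = x / y                        # (3.5+0.5j): built-in complex division agrees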
More Math Tricks
Now that we understand the conjugate, there are a few properties to consider:
(x + y)* = x* + y*
(x · y)* = x* · y*
The first should make sense. Adding two numbers and “reflecting” (conjugating) the result, is the same as adding the reflections. Another way to think about it: sliding two numbers then taking the opposite, is the same as sliding both times in the opposite direction.
The second property is trickier. Sure, the algebra may work, but what’s the intuitive explanation?
The result (xy)* means:
- Multiply the magnitudes: |x| * |y|
- Add the angles and take the conjugate (opposite): angle(x) + angle(y) becomes “-angle(x) + -angle(y)”
And x* times y* means:
- Multiply the magnitudes: |x| * |y| (this is the same as above)
- Add the conjugate angles: angle(x*) + angle(y*) = -angle(x) + -angle(y)
Aha! We get the same angle and magnitude in each case, and we didn’t have to jump into the traditional algebra explanation. Algebra is fine, but it isn’t always the most satisfying explanation.
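Both properties are easy to spot-check numerically; a small sketch of my own with arbitrary sample values:

x = 3 + 4j
y = 1 + 2j

sum_then_conjugate = (x + y).conjugate()                # (4-6j)
conjugate_then_sum = x.conjugate() + y.conjugate()      # (4-6j)

product_then_conjugate = (x * y).conjugate()            # (-5-10j)
conjugate_then_product = x.conjugate() * y.conjugate()  # (-5-10j)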
A Quick Example
The conjugate is a way to “undo” a rotation. Think about it this way:
- I deposited $3, $10, $15.75 and $23.50 into my account. What transaction will cancel these out? To find the opposite: add them up, and multiply by -1.
- I rotated a line by doing several multiplications: (3 + 4i), (1 + i), and (2 + 10i). What rotation will cancel these out? To find the opposite: multiply the complex numbers together, and take the conjugate of the result.
See the conjugate z* as a way to “cancel” the rotation effects of z, just like a negative number “cancels” the effects of addition. One caveat: with conjugates, you need to divide by |z| * |z| to remove the scaling effects as well.
The math here isn’t new, but I never realized why complex conjugates worked as they did. Why a – bi and not -a + bi? Well, complex conjugates are not a random choice, but a mirror image from the imaginary perspective, with the exact opposite angle.
Seeing imaginary numbers as rotations gives us a new mindset to approach problems; the “plug and chug” formulas can make intuitive sense, even for a strange topic like complex numbers. Happy math.
Other Posts In This Series
- A Visual, Intuitive Guide to Imaginary Numbers
- Intuitive Arithmetic With Complex Numbers (This post)
- Understanding Why Complex Multiplication Works
- An Intuitive Guide To Exponential Functions & e
- Demystifying the Natural Logarithm (ln)
- Understanding Exponents (Why does 0^0 = 1?)
- A Visual Guide to Simple, Compound and Continuous Interest Rates
- Using Logarithms in the Real World
- How To Measure Any Distance With The Pythagorean Theorem
- Surprising Uses of the Pythagorean Theorem
- Rescaling the Pythagorean Theorem
- Intuitive Guide to Angles, Degrees and Radians
- Intuitive Understanding Of Euler's Formula
- Intuitive Understanding of Sine Waves
- An Interactive Guide To The Fourier Transform
In mathematics, a surface integral is a definite integral taken over a surface (which may be a curved set in space). Just as a line integral handles one dimension or one variable, a surface integral can be thought of as a double integral along two dimensions. Given a surface, one may integrate over its scalar fields (that is, functions which return numbers as values), and vector fields (that is, functions which return vectors as values).
Surface integrals of scalar fields
Consider a surface S on which a scalar field f is defined. If one thinks of S as made of some material, and for each x in S the number f(x) is the density of material at x, then the surface integral of f over S is the mass per unit thickness of S. (This is only true if the surface is an infinitesimally thin shell.) One approach to calculating the surface integral is then to split the surface in many very small pieces, assume that on each piece the density is approximately constant, find the mass per unit thickness of each piece by multiplying the density of the piece by its area, and then sum up the resulting numbers to find the total mass per unit thickness of S.
To find an explicit formula for the surface integral, mathematicians parameterize S by considering on S a system of curvilinear coordinates, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by
∫_S f dS = ∫∫_T f(x(s, t)) |∂x/∂s × ∂x/∂t| ds dt
For example, to find the surface area of some general functional shape, say z = f(x, y), we have
A = ∫_S dS = ∫∫_T |∂r/∂x × ∂r/∂y| dx dy
where r = (x, y, f(x, y)). So that ∂r/∂x = (1, 0, ∂f/∂x), and ∂r/∂y = (0, 1, ∂f/∂y). So,
A = ∫∫_T |(-∂f/∂x, -∂f/∂y, 1)| dx dy = ∫∫_T √((∂f/∂x)² + (∂f/∂y)² + 1) dx dy
which is the formula used for the surface area of a general functional shape. One can recognize the vector (-∂f/∂x, -∂f/∂y, 1) appearing above as the normal vector to the surface.
Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three dimensional space.
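As a concrete illustration (not part of the original article), here is a rough numerical sketch in Python with NumPy that applies the parameterized formula to the unit sphere, x(s, t) = (sin s cos t, sin s sin t, cos s), with f = 1; the Riemann sum should land close to the exact area 4π ≈ 12.566.

import numpy as np

n = 400
s = np.linspace(0.0, np.pi, n)
t = np.linspace(0.0, 2.0 * np.pi, n)
ds, dt = s[1] - s[0], t[1] - t[0]
S, T = np.meshgrid(s, t, indexing="ij")

# Tangent vectors dx/ds and dx/dt, stacked as (..., 3) arrays.
x_s = np.stack([np.cos(S) * np.cos(T), np.cos(S) * np.sin(T), -np.sin(S)], axis=-1)
x_t = np.stack([-np.sin(S) * np.sin(T), np.sin(S) * np.cos(T), np.zeros_like(S)], axis=-1)

# |dx/ds x dx/dt| is the area element (analytically sin(s) for the sphere).
dA = np.linalg.norm(np.cross(x_s, x_t), axis=-1)
area = dA.sum() * ds * dt        # close to 4*pi, about 12.566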
Surface integrals of vector fields
Consider a vector field v on S, that is, for each x in S, v(x) is a vector.
The surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector. For example, this applies to the electric field at some fixed point due to an electrically charged surface, or the gravity at some fixed point due to a sheet of material. It can also be used to calculate the magnetic flux through a surface.
Alternatively, mathematicians can integrate the normal component of the vector field; the result is a scalar. An example is a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in a unit amount of time.
This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux. Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal to S at each point, which will give us a scalar field, and integrate the obtained field as above. This gives the formula
∫_S v · dS = ∫∫_T v(x(s, t)) · (∂x/∂s × ∂x/∂t) ds dt
The cross product on the right-hand side of this expression is a surface normal determined by the parametrization.
This formula defines the integral on the left (note the dot and the vector notation for the surface element).
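Again as an illustration of my own rather than part of the article: the outward flux of the field v(x, y, z) = (x, y, z) through the unit sphere is exactly 4π, and a Riemann sum over the same parameterization as before gets close to that value.

import numpy as np

n = 400
s = np.linspace(0.0, np.pi, n)
t = np.linspace(0.0, 2.0 * np.pi, n)
ds, dt = s[1] - s[0], t[1] - t[0]
S, T = np.meshgrid(s, t, indexing="ij")

# Points on the sphere and the two tangent vectors.
x = np.stack([np.sin(S) * np.cos(T), np.sin(S) * np.sin(T), np.cos(S)], axis=-1)
x_s = np.stack([np.cos(S) * np.cos(T), np.cos(S) * np.sin(T), -np.sin(S)], axis=-1)
x_t = np.stack([-np.sin(S) * np.sin(T), np.sin(S) * np.cos(T), np.zeros_like(S)], axis=-1)

normal = np.cross(x_s, x_t)      # unnormalized surface normal; points outward here
v = x                            # the vector field evaluated on the surface
flux = (v * normal).sum(axis=-1).sum() * ds * dt    # close to 4*pi, about 12.566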
Theorems involving surface integrals
Advanced issues
Changing parameterization
The discussion above defined the surface integral by using a parametrization of the surface S. A given surface might have several parametrizations. For example, when the locations of the North Pole and South Pole are moved on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple, the value of the surface integral will be the same no matter what parametrization one uses.
Integrals of vector fields are more complicated, because the surface normal is involved. Mathematicians have proved that given two parametrizations of the same surface, whose surface normals point in the same direction, both parametrizations give the same value for the surface integral. If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization; but, when integrating vector fields, we do need to decide in advance which direction the normal will point to and then choose any parametrization consistent with that direction.
Parameterizations work on parts of the surface
Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface; this is true for example for the surface of a cylinder (of finite height). The obvious solution is then to split that surface in several pieces, calculate the surface integral on each piece, and then add them all up. This is indeed how things work, but when integrating vector fields one needs to again be careful how to choose the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For the cylinder, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts the normal must point out of the body too.
Inconsistent surface normals
Last, there are surfaces which do not have a surface normal at each point with consistent results (for example, the Möbius strip). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, the normal vectors coming from different pieces cannot be reconciled. This means that some junction between two pieces will have normal vectors pointing in opposite directions. Such a surface is called non-orientable. Vector fields can not be integrated on non-orientable surfaces.
Other pages
- Divergence theorem
- Stokes' theorem
- Line integral
- Volume integral
- Cartesian coordinate system
- Volume and surface area elements in a spherical coordinate system
- Volume and surface area elements in a cylindrical coordinate system
- Holstein–Herring method
In mathematics, the four color theorem, or the four color map theorem states that, given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to mapmakers. According to an article by the math historian Kenneth May (Wilson 2002, 2), “Maps utilizing only four colours are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property.”
Three colors are adequate for simpler maps, but an additional fourth color is required for some maps, such as a map in which one region is surrounded by an odd number of other regions that touch each other in a cycle. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century (Heawood 1890); however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem. Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map (regardless of whether it is a counterexample or not) must have a portion that looks like one of these 1,936 maps. Showing this required hundreds of pages of hand analysis. Appel and Haken concluded that no smallest counterexamples existed because any must contain, yet not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand (Swart 1980). Since then the proof has gained wider acceptance, although doubts remain (Wilson 2002, 216–222).
To dispel remaining doubt about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. Additionally in 2005, the theorem was proven by Georges Gonthier with general purpose theorem proving software.
Precise formulation of the theorem
The intuitive statement of the four color theorem, i.e. 'that given any separation of a plane into contiguous regions, called a map, the regions can be colored using at most four colors so that no two adjacent regions have the same color', needs to be interpreted appropriately to be correct. First, all corners, points that belong to (technically, are in the closure of) three or more countries, must be ignored. In addition, bizarre maps (using regions of finite area but infinite perimeter) can require more than four colors.
Second, for the purpose of the theorem every "country" has to be a simply connected region, or contiguous. In the real world, this is not true (e.g., Alaska as part of the United States, Nakhchivan as part of Azerbaijan, and Kaliningrad as part of Russia are not contiguous). Because the territory of a particular country must be the same color, four colors may not be sufficient. For instance, consider a simplified map:
In this map, the two regions labeled A belong to the same country, and must be the same color. This map then requires five colors, since the two A regions together are contiguous with four other regions, each of which is contiguous with all the others. If A consisted of three regions, six or more colors might be required; one can construct maps that require an arbitrarily high number of colors. A similar construction also applies if a single color is used for all bodies of water, as is usual on real maps.
An easier to state version of the theorem uses graph theory. The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar (here we are speaking of the graphs obtained from maps in this way): it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves that lead without crossing within each region from the vertex location to each shared boundary point of the region. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short, "every planar graph is four-colorable" (Thomas 1998, p. 849; Wilson 2002).
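As a small illustration of the graph form of the statement (my own sketch, not part of the article), the brute-force search below four-colors a planar graph. The wheel graph used here is planar and genuinely needs all four colors, so a three-color search fails while a four-color search succeeds.

def color_graph(adj, k=4):
    """Backtracking search for a k-coloring of a graph given as a dict
    mapping each vertex to a list of its neighbors.  Returns a dict
    vertex -> color, or None if no k-coloring exists.  Fine for tiny
    graphs; exponential in the worst case."""
    vertices = list(adj)
    coloring = {}

    def backtrack(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        used = set(coloring[u] for u in adj[v] if u in coloring)
        for c in range(k):
            if c not in used:
                coloring[v] = c
                if backtrack(i + 1):
                    return True
                del coloring[v]
        return False

    return coloring if backtrack(0) else None

# Wheel graph: a hub joined to every vertex of a 5-cycle (planar, needs 4 colors).
wheel = {
    "hub": ["a", "b", "c", "d", "e"],
    "a": ["hub", "b", "e"],
    "b": ["hub", "a", "c"],
    "c": ["hub", "b", "d"],
    "d": ["hub", "c", "e"],
    "e": ["hub", "d", "a"],
}
print(color_graph(wheel, k=3))   # None: three colors are not enough
print(color_graph(wheel, k=4))   # e.g. {'hub': 0, 'a': 1, 'b': 2, 'c': 1, 'd': 2, 'e': 3}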
Early proof attempts
The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother, Frederick, was a student of Augustus De Morgan at University College. Francis inquired with Frederick regarding it, who then took it to De Morgan (Francis Guthrie graduated later in 1852, and later became a professor of mathematics in South Africa). According to De Morgan:
"A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact — and do not yet. He says that if a figure be any how divided and the compartments differently coloured so that figures with any portion of common boundary line are differently coloured — four colours may be wanted but not more — the following is his case in which four colours are wanted. Query cannot a necessity for five or more be invented… " (Wilson 2002, p. 18)
"F.G.", perhaps one of the two Guthries, published the question in The Athenaeum in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference by Arthur Cayley (1879) in turn credits the conjecture to De Morgan.
There were several early failed attempts at proving the theorem. One proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and 1891 Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years (Thomas 1998, p. 848).
Proof by computer
During the 1960s and 1970s German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel-Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work (Wilson 2002).
Others took up his methods and his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proven the theorem. They were assisted in some algorithmic work by John A. Koch (Wilson 2002).
If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts (Wilson 2002; Appel & Haken 1989; Thomas 1998, pp. 852–853):
- An unavoidable set contains regions such that every map must have at least one region from this collection.
- A reducible configuration is an arrangement of countries that cannot occur in a minimal counterexample. If a map contains a reducible configuration, then the map can be reduced to a smaller map. This smaller map has the condition that if it can be colored with four colors, then the original map can also. This implies that if the original map cannot be colored with four colors the smaller map can't either and so the original map is not minimal.
Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,936 reducible configurations (later reduced to 1,476) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand (Appel & Haken 1989).
Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating "Four colors suffice." At the same time the unusual nature of the proof—it was the first major theorem to be proven with extensive computer assistance—and the complexity of the human-verifiable portion, aroused considerable controversy (Wilson 2002).
In the early 1980s, rumors spread of a flaw in the Appel-Haken proof. Ulrich Schmidt at RWTH Aachen examined Appel and Haken's proof for his master's thesis (Wilson 2002, 225). He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure (Appel & Haken 1989). In 1986, Appel and Haken were asked by the editor of Mathematical Intelligencer to write an article addressing the rumors of flaws in their proof. They responded that the rumors were due to a "misinterpretation of [Schmidt's] results" and obliged with a detailed article (Wilson 2002, 225–226). Their magnum opus, Every Planar Map is Four-Colorable, a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989 and explained Schmidt's discovery and several further errors found by others (Appel & Haken 1989).
Simplification and verification
Since the proving of the theorem, efficient algorithms have been found for 4-coloring maps requiring only O(n²) time, where n is the number of vertices. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic time algorithm, improving on a quartic algorithm based on Appel and Haken’s proof (Thomas 1995; Robertson et al. 1996). This new proof is similar to Appel and Haken's but more efficient because it reduced the complexity of the problem and required checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by computer and are impractical to check by hand (Thomas 1998, pp. 852–853). In 2001, the same authors announced an alternative proof, by proving the snark theorem (Thomas; Pegg et al. 2002).
In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel (Gonthier 2008).
Summary of proof ideas
The following discussion is a summary based on the introduction to Appel and Haken's book Every Planar Map is Four Colorable (Appel & Haken 1989). Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above.
Kempe's argument goes as follows. First, if planar regions separated by the graph are not triangulated, i.e. do not have exactly three edges in their boundaries, we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated.
Suppose v, e, and f are the number of vertices, edges, and regions. Since each region is triangular and each edge is shared by two regions, we have that 2e = 3f. This together with Euler's formula v − e + f = 2 can be used to show that 6v − 2e = 12. Now, the degree of a vertex is the number of edges abutting it. If v_n is the number of vertices of degree n and D is the maximum degree of any vertex,
Σ_{i=1}^{D} (6 − i) v_i = 6v − 2e = 12.
But since 12 > 0 and 6 − i ≤ 0 for all i ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less.
If there is a graph requiring 5 colors, then there is a minimal such graph, where removing any vertex makes it four-colorable. Call this graph G. G cannot have a vertex of degree 3 or less, because if d(v) ≤ 3, we can remove v from G, four-color the smaller graph, then add back v and extend the four-coloring to it by choosing a color different from its neighbors.
Kempe also showed correctly that G can have no vertex of degree 4. As before we remove the vertex v and four-color the remaining vertices. If all four neighbors of v are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and v can now be added back and colored red.
This leaves only the case where G has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one was satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem.
In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather the form of the argument is generalized to considering configurations, which are connected subgraphs of G with the degree of each vertex (in G) specified. For example, the case described in degree 4 vertex situation is the configuration consisting of a single vertex labelled as having degree 4 in G. As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a reducible configuration. If at least one of a set of configurations must occur somewhere in G, that set is called unavoidable. The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem.
Because G is triangular, the degree of each vertex in a configuration is known, and all edges internal to the configuration are known, the number of vertices in G adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the ring of the configuration; a configuration with k vertices in its ring is a k-ring configuration, and the configuration together with its ring is called the ringed configuration. As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called initially good. For example, the single-vertex configuration above with 3 or less neighbors were initially good. In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance.
Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially positive and negative "electrical charge" is distributed amongst the vertices so that the total is positive.
Recall the formula above:
Σ_{i=1}^{D} (6 − i) v_i = 12.
Each vertex is assigned an initial charge of 6-deg(v). Then one "flows" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the discharging procedure. Since charge is preserved, some vertices still have positive charge. The rules restrict the possibilities for configurations of positively-charged vertices, so enumerating all such possible configurations gives an unavoidable set.
As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years.
A technical detail not discussed here but required to complete the proof is immersion reducibility.
The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused as a matter of policy to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it (Wilson 2002). Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were exposed. But many more, authored by amateurs, were never published at all.
Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors.
This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid.
Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors.
Other false disproofs violate the assumptions of the theorem in unexpected ways, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.
The four-color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is k-colorable, then the whole graph is also k-colorable (Nash-Williams 1967). This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for First-Order Logic, simply by expressing the colorability of an infinite graph with a set of logical formulae.
One can also consider the coloring problem on surfaces other than the plane (Weisstein). The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula
p = ⌊(7 + √(49 − 24χ)) / 2⌋,
where the outermost brackets denote the floor function.
Alternatively, for an orientable surface the formula can be given in terms of the genus of a surface, g:
p = ⌊(7 + √(1 + 48g)) / 2⌋.
This formula, the Heawood conjecture, was conjectured by P.J. Heawood in 1890 and proven by Gerhard Ringel and J. T. W. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) and requires 6 colors, as shown by P. Franklin in 1934 (Weisstein).
For example, the torus has Euler characteristic χ = 0 (and genus g = 1) and thus p = 7, so no more than 7 colors are required to color any map on a torus. The Szilassi polyhedron is an example that requires seven colors.
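The formula is easy to turn into code; this short sketch (mine, not from the article) reproduces the numbers above.

from math import floor, sqrt

def heawood_bound(chi):
    # Maximum number of colors needed on a closed surface of Euler
    # characteristic chi (Ringel-Youngs); the Klein bottle (chi = 0)
    # is the lone exception and needs only 6.
    return int(floor((7 + sqrt(49 - 24 * chi)) / 2))

print(heawood_bound(2))    # 4  (sphere: the four color theorem)
print(heawood_bound(0))    # 7  (torus; the Klein bottle also has chi 0 but needs only 6)
print(heawood_bound(-2))   # 8  (orientable genus-2 surface)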
A Möbius strip requires six colors (Weisstein) as do 1-planar graphs (graphs drawn with at most one simple crossing per edge) (Borodin 1984). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, faces, or vertex-face pair have the same color, then again at most six colors are needed (Borodin 1984).
There is no obvious extension of the coloring result to three-dimensional solid regions. By using a set of n flexible rods, one can arrange that every rod touches every other rod. The set would then require n colors, or n+1 if you consider the empty space that also touches every rod. The number n can be taken to be any integer, as large as desired. Such examples were known to Fredrick Guthrie in 1880 (Wilson 2002). Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area) an unbounded number of colors may be necessary (Reed & Allwright 2008; Magnant & Martin (2011)).
- Graph coloring: the problem of finding optimal colorings of graphs that are not necessarily planar.
- Grötzsch's theorem: triangle-free planar graphs are 3-colorable.
- Hadwiger–Nelson problem: how many colors are needed to color the plane so that no two points at unit distance apart have the same color?
- List of sets of four countries that border one another: contemporary examples of national maps requiring four colors.
- Apollonian network: the planar graphs that require four colors and have exactly one four-coloring.
- Georges Gonthier (December, 2008). "Formal Proof---The Four-Color Theorem". Notices of the AMS 55 (11): 1382–1393.From this paper: Definitions: A planar map is a set of pairwise disjoint subsets of the plane, called regions. A simple map is one whose regions are connected open sets. Two regions of a map are adjacent if their respective closures have a common point that is not a corner of the map. A point is a corner of a map if and only if it belongs to the closures of at least three regions. Theorem: The regions of any simple planar map can be colored with only four colors, in such a way that any two adjacent regions have different colors.
- Hud Hudson (May, 2003). "Four Colors Do Not Suffice". The American Mathematical Monthly 110 (5): 417–423. JSTOR 3647828.
- Donald MacKenzie, Mechanizing Proof: Computing, Risk, and Trust (MIT Press, 2004) p103
- F. G. (June 10, 1854), "Tinting Maps", The Athenaeum: 726.
- Brendan D. McKay (2012). "A note on the history of the four-colour conjecture". arXiv:1201.2852.
- De Morgan, Augustus (April 14, 1860), "Review of Whewell's "The Philosophy of Discovery"", The Athenaeum: 501–503. As cited by Wilson, John (1976), "New light on the origin of the four-color conjecture", Historia Mathematica 3: 329–330, doi:10.1016/0315-0860(76)90106-3, MR 0504961.
- Tait, P. G. (1880), "Remarks on the colourings of maps", Proc. R. Soc. Edinburgh 10: 729
- Gary Chartrand and Linda Lesniak, Graphs & Digraphs (CRC Press, 2005) p221
- Allaire, F. (1997), "Another proof of the four colour theorem—Part I", Proceedings, 7th Manitoba Conference on Numerical Mathematics and Computing, Congr. Numer. 20: 3–72
- Appel, Kenneth; Haken, Wolfgang (1977), "Every Planar Map is Four Colorable Part I. Discharging", Illinois Journal of Mathematics 21: 429–490
- Appel, Kenneth; Haken, Wolfgang; Koch, John (1977), "Every Planar Map is Four Colorable Part II. Reducibility", Illinois Journal of Mathematics 21: 491–567
- Appel, Kenneth; Haken, Wolfgang (October 1977), "Solution of the Four Color Map Problem", Scientific American 237 (4): 108–121, doi:10.1038/scientificamerican1077-108
- Appel, Kenneth; Haken, Wolfgang (1989), Every Planar Map is Four-Colorable, Providence, RI: American Mathematical Society, ISBN 0-8218-5103-9
- Bernhart, Frank R. (1977), "A digest of the four color theorem.", Journal of Graph Theory 1: 207–225, doi:10.1002/jgt.3190010305
- Borodin, O. V. (1984), "Solution of the Ringel problem on vertex-face coloring of planar graphs and coloring of 1-planar graphs", Metody Diskretnogo Analiza (41): 12–26, 108, MR 832128.
- Cayley, Arthur (1879), "On the colourings of maps", Proc. Royal Geographical Society (Blackwell Publishing) 1 (4): 259–261, doi:10.2307/1799998, JSTOR 1799998
- Fritsch, Rudolf; Fritsch, Gerda (1998), The Four Color Theorem: History, Topological Foundations and Idea of Proof, New York: Springer, ISBN 978-0-387-98497-1
- Gonthier, Georges (2008), "Formal Proof—The Four-Color Theorem", Notices of the American Mathematical Society 55 (11): 1382–1393
- Gonthier, Georges (2005), A computer-checked proof of the four colour theorem, unpublished
- Hadwiger, Hugo (1943), "Über eine Klassifikation der Streckenkomplexe", Vierteljschr. Naturforsch. Ges. Zürich 88: 133–143
- Heawood, P. J. (1890), "Map-Colour Theorem", Quarterly Journal of Mathematics, Oxford 24: 332–338
- Magnant, C.; Martin, D. M. (2011), "Coloring rectangular blocks in 3-space", Discussiones Mathematicae Graph Theory 31 (1): 161–170
- Nash-Williams, C. St. J. A. (1967), "Infinite graphs—a survey", J. Combinatorial Theory 3: 286–301, MR 0214501.
- O'Connor; Robertson (1996), The Four Colour Theorem, MacTutor archive
- Pegg, A.; Melendez, J.; Berenguer, R.; Sendra, J. R.; Hernandez; Del Pino, J. (2002), "Book Review: The Colossal Book of Mathematics", Notices of the American Mathematical Society 49 (9): 1084–1086, Bibcode:2002ITED...49.1084A, doi:10.1109/TED.2002.1003756
- Reed, Bruce; Allwright, David (2008), "Painting the office", Mathematics-in-Industry Case Studies 1: 1–8
- Ringel, G.; Youngs, J.W.T. (1968), "Solution of the Heawood Map-Coloring Problem", Proc. Nat. Acad. Sci. USA 60 (2): 438–445, Bibcode:1968PNAS...60..438R, doi:10.1073/pnas.60.2.438, PMC 225066, PMID 16591648
- Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1996), "Efficiently four-coloring planar graphs", Efficiently four-coloring planar graphs, STOC'96: Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, ACM Press, pp. 571–575, doi:10.1145/237814.238005
- Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1997), "The Four-Colour Theorem", J. Combin. Theory Ser. B 70 (1): 2–44, doi:10.1006/jctb.1997.1750
- Saaty, Thomas; Kainen, Paul (1986), "The Four Color Problem: Assaults and Conquest", Science (New York: Dover Publications) 202 (4366): 424, Bibcode:1978Sci...202..424S, doi:10.1126/science.202.4366.424, ISBN 0-486-65092-8
- Swart, ER (1980), "The philosophical implications of the four-color problem", American Mathematical Monthly (Mathematical Association of America) 87 (9): 697–702, doi:10.2307/2321855, JSTOR 2321855
- Thomas, Robin (1998), "An Update on the Four-Color Theorem", Notices of the American Mathematical Society 45 (7): 848–859
- Thomas, Robin (1995), The Four Color Theorem
- Thomas, Robin, Recent Excluded Minor Theorems for Graphs, p. 14
- Wilson, Robin (2002), Four Colors Suffice, London: Penguin Books, ISBN 0-691-11533-8
- Hazewinkel, Michiel, ed. (2001), "Four-colour problem", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Blanuša snarks", MathWorld.
- Weisstein, Eric W., "Map coloring", MathWorld. | http://readtiger.com/wkp/en/Four_color_theorem | 13 |
Supermassive black hole
A supermassive black hole (SMBH) is the largest type of black hole, on the order of hundreds of thousands to billions of solar masses. Most—and possibly all—galaxies are inferred to contain a supermassive black hole at their centers. In the case of the Milky Way, the SMBH is believed to correspond with the location of Sagittarius A*.
Supermassive black holes have properties which distinguish them from lower-mass classifications. First, the average density of a supermassive black hole (defined as the mass of the black hole divided by the volume within its Schwarzschild radius) can be less than the density of water in the case of some supermassive black holes. This is because the Schwarzschild radius is directly proportional to mass, while density is inversely proportional to the volume. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have lower average density. Also, the tidal forces in the vicinity of the event horizon are significantly weaker than for lower-mass black holes. Since the central singularity is so far away from the horizon, a hypothetical astronaut traveling towards the black hole center would not experience significant tidal force until very deep into the black hole.
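To see the scaling concretely, here is a short back-of-the-envelope Python sketch (not part of the article); the constants are standard SI values and the two masses are illustrative choices.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
PI = 3.141592653589793

def schwarzschild_density(mass_kg):
    # Mass divided by the volume inside the Schwarzschild radius.
    r_s = 2.0 * G * mass_kg / c**2
    return mass_kg / ((4.0 / 3.0) * PI * r_s**3)

print(schwarzschild_density(1.0 * M_sun))   # ~1.8e19 kg/m^3: a stellar-mass hole is enormously dense
print(schwarzschild_density(1e9 * M_sun))   # ~18 kg/m^3: well below water's 1000 kg/m^3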
History of research
Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a supermassive black hole. Thus, the first thoughts about supermassive black holes related to the center of the Milky Way. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the baseline interferometer of the National Radio Astronomy Observatory. They discovered a radio source that emits synchrotron radiation; it was also found to be dense and immobile because of its gravitation. Therefore, the first discovered supermassive black hole exists in the center of the Milky Way.
The origin of supermassive black holes remains an open field of research. Astrophysicists agree that once a black hole is in place in the center of a galaxy, it can grow by accretion of matter and by merging with other black holes. There are, however, several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. The most obvious hypothesis is that the seeds are black holes of tens or perhaps hundreds of solar masses that are left behind by the explosions of massive stars and grow by accretion of matter. Another model involves a large gas cloud in the period before the first stars formed collapsing into a “quasi-star” and then a black hole of initially only around ~20 solar masses, and then rapidly accreting to become relatively quickly an intermediate-mass black hole, and possibly a SMBH if the accretion-rate is not quenched at higher masses. The initial “quasi-star” would become unstable to radial perturbations because of electron-positron pair production in its core, and may collapse directly into a black hole without a supernova explosion, which would eject most of its mass and prevent it from leaving a black hole as a remnant. Yet another model involves a dense stellar cluster undergoing core-collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds. Finally, primordial black holes may have been produced directly from external pressure in the first moments after the Big Bang. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical.
The difficulty in forming a supermassive black hole resides in the need for enough matter to be in a small enough volume. This matter needs to have very little angular momentum in order for this to happen. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks. Gas accretion is the most efficient, and also the most conspicuous, way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars. Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of billions of solar masses had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies.
Currently, there appears to be a gap in the observed mass distribution of black holes. There are stellar-mass black holes, generated from collapsing stars, which range up to perhaps 33 solar masses. The minimal supermassive black hole is in the range of a hundred thousand solar masses. Between these regimes there appears to be a dearth of intermediate-mass black holes. Such a gap would suggest qualitatively different formation processes. However, some models suggest that ultraluminous X-ray sources (ULXs) may be black holes from this missing group.
Doppler measurements
Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center. Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers the active galaxy's "engine".
Milky Way galactic center black hole
- The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light hours (1.8×10^13 m or 120 AU) from the center of the central object.
- From the motion of star S2, the object's mass can be estimated as 4.1 million solar masses, or about 8.2 × 10^36 kg (a rough numerical check appears after this list).
- The radius of the central object must be significantly less than 17 light hours, because otherwise, S2 would either collide with it or be ripped apart by tidal forces. In fact, recent observations indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.
- Only a black hole is dense enough to contain 4.1 million solar masses in this volume of space.
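As a rough check of the quoted mass (my own sketch): taking the 15.2-year period above together with a semi-major axis of about 980 AU for S2's orbit, a literature value that is not stated in this text, Kepler's third law in solar-system units gives approximately the right answer.

# Kepler's third law with a in AU and T in years gives the enclosed
# mass directly in solar masses: M = a**3 / T**2.
a_au = 980.0    # assumed semi-major axis of S2's orbit (not given above)
T_yr = 15.2     # orbital period from the observations above

mass_solar = a_au**3 / T_yr**2    # roughly 4.1 million solar masses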
The Max Planck Institute for Extraterrestrial Physics and UCLA Galactic Center Group have provided the strongest evidence to date that Sagittarius A* is the site of a supermassive black hole, based on data from the ESO and the Keck telescope.
Supermassive black holes outside the Milky Way
It is now widely accepted that the center of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M-sigma relation, strongly suggests a connection between the formation of the black hole and the galaxy itself.
The nearby Andromeda Galaxy, 2.5 million light-years away, contains a (1.1–2.3) × 10^8 (110–230 million) solar mass central black hole, significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's neighborhood appears to be that of M87, weighing in at (6.4 ± 0.5) × 10^9 (~6.4 billion) solar masses at a distance of 53.5 million light-years. On 5 December 2011, astronomers announced the largest supermassive black hole yet found, that of NGC 4889, weighing in at 21 billion solar masses at a distance of 336 million light-years in the constellation Coma Berenices.
Some galaxies, such as Galaxy 0402+379, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the previously most massive black hole known (until the December 2011 discovery), with a mass estimated at 18 billion solar masses. A supermassive black hole was recently discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications of this discovery for black hole formation are unknown, but it may indicate that black holes formed before bulges.
On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart; according to astronomers, that is the only likely explanation for the sudden X-ray flare observed that day and for the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be an SMBH with a mass of the order of a million solar masses. This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto an SMBH.
As reported in Nature on 28 November 2012, astronomers have used the Hobby-Eberly Telescope to measure the mass of an extraordinarily large black hole (approximately 17 billion solar masses), possibly one of the largest black holes found so far. It was found in the compact lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus. The black hole accounts for approximately 59 percent of the mass of the galaxy's bulge (14 percent of the total stellar mass of the galaxy).
Supermassive black holes in fiction
See also
- Active galactic nucleus
- Black hole
- Central massive object
- Galactic center
- General relativity
- Hypercompact stellar system
- M-sigma relation
- Neutron star
- Sagittarius A*
- Chandra :: Photo Album :: RX J1242-11 :: 18 Feb 04
- Antonucci, R. (1993). "Unified Models for Active Galactic Nuclei and Quasars". Annual Reviews in Astronomy and Astrophysics 31 (1): 473–521. Bibcode:1993ARA&A..31..473A. doi:10.1146/annurev.aa.31.090193.002353.
- Urry, C.; Padovani, P. (1995). "Unified Schemes for Radio-Loud Active Galactic Nuclei". Publications of the Astronomical Society of the Pacific 107: 803–845. arXiv:astro-ph/9506063. Bibcode:1995PASP..107..803U. doi:10.1086/133630.
- Schödel, R.; et al. (2002). "A star in a 15.2-year orbit around the supermassive black hole at the centre of the Milky Way". Nature 419 (6908): 694–696. arXiv:astro-ph/0210426. Bibcode:2002Natur.419..694S. doi:10.1038/nature01121. PMID 12384690.
- Celotti, A.; Miller, J.C.; Sciama, D.W. (1999). "Astrophysical evidence for the existence of black holes". Class. Quant. Grav. 16 (12A): A3–A21. arXiv:astro-ph/9912186. doi:10.1088/0264-9381/16/12A/301.
- Melia 2007, p. 2
- Begelman, M. C.; et al. (Jun 2006). "Formation of supermassive black holes by direct collapse in pre-galactic haloes". Monthly Notices of the Royal Astronomical Society 370 (1): 289–298. arXiv:astro-ph/0602363. Bibcode:2006MNRAS.370..289B. doi:10.1111/j.1365-2966.2006.10467.x.
- Spitzer, L. (1987). Dynamical Evolution of Globular Clusters. Princeton University Press. ISBN 0-691-08309-6.
- "Biggest Black Hole Blast Discovered". ESO Press Release. Retrieved 28 November 2012.
- Winter, L.M.; et al. (Oct 2006). "XMM-Newton Archival Study of the ULX Population in Nearby Galaxies". Astrophysical Journal 649 (2): 730–752. arXiv:astro-ph/0512480. Bibcode:2006ApJ...649..730W. doi:10.1086/506579.
- "SINFONI in the Galactic Center: Young Stars and Infrared Flares in the Central Light-Month" by Eisenhauer et al, The Astrophysical Journal, 628:246-259, 2005
- Henderson, Mark (December 9, 2008). "Astronomers confirm black hole at the heart of the Milky Way". London: Times Online. Retrieved 2009-05-17.
- Schödel, R.; et. al. (17 October 2002). "A star in a 15.2-year orbit around the supermassive black hole at the centre of the Milky Way". Nature 419 (6908): 694–696. arXiv:astro-ph/0210426. Bibcode:2002Natur.419..694S. doi:10.1038/nature01121. PMID 12384690.
- Ghez, A. M.; et al. (December 2008). "Measuring Distance and Properties of the Milky Way's Central Supermassive Black Hole with Stellar Orbits". Astrophysical Journal 689 (2): 1044–1062. arXiv:0808.2870. Bibcode:2008ApJ...689.1044G. doi:10.1086/592738.
- Milky Way's Central Monster Measured
- Ghez, A. M.; Salim, S.; Hornstein, S. D.; Tanner, A.; Lu, J. R.; Morris, M.; Becklin, E. E.; Duchêne, G. (May 2005). "Stellar Orbits around the Galactic Center Black Hole". The Astrophysical Journal 620 (2): 744–757. arXiv:astro-ph/0306130. Bibcode:2005ApJ...620..744G. doi:10.1086/427175.
- UCLA Galactic Center Group
- ESO - 2002
- King, Andrew (2003-09-15). "Black Holes, Galaxy Formation, and the MBH-σ Relation". The Astrophysical Journal Letters 596: L27–L29. arXiv:astro-ph/0308342. Bibcode:2003ApJ...596L..27K. doi:10.1086/379143.
- Richstone, D. et al. (January 13, 1997). "Massive Black Holes Dwell in Most Galaxies, According to Hubble Census". 189th Meeting of the American Astronomical Society. Retrieved 2009-05-17.
- Merritt, D.; Ferrarese, Laura (2001-01-15). "The MBH-σ Relation for Supermassive Black Holes". The Astrophysical Journal 547 (1): 140–145. arXiv:astro-ph/0008310. Bibcode:2001ApJ...547..140M. doi:10.1086/318372.
- Robert Roy Britt (2003-07-29). "The New History of Black Holes: 'Co-evolution' Dramatically Alters Dark Reputation".
- "Astronomers crack cosmic chicken-or-egg dilemma". 2003-07-22.
- Bender, Ralf; et al. (2005-09-20). "HST STIS Spectroscopy of the Triple Nucleus of M31: Two Nested Disks in Keplerian Rotation around a Supermassive Black Hole". The Astrophysical Journal 631 (1): 280–300. arXiv:astro-ph/0509839. Bibcode:2005ApJ...631..280B. doi:10.1086/432434.
- Gebhardt, Karl; Thomas, Jens (August 2009). "The Black Hole Mass, Stellar Mass-to-Light Ratio, and Dark Halo in M87". The Astrophysical Journal 700 (2): 1690–1701. arXiv:0906.1492. Bibcode:2009ApJ...700.1690G. doi:10.1088/0004-637X/700/2/1690.
- Macchetto, F.; Marconi, A.; Axon, D. J.; Capetti, A.; Sparks, W.; Crane, P. (November 1997). "The Supermassive Black Hole of M87 and the Kinematics of Its Associated Gaseous Disk". Astrophysical Journal 489 (2): 579. arXiv:astro-ph/9706252. Bibcode:1997ApJ...489..579M. doi:10.1086/304823.
- Overbye, Dennis (2011-12-05). "Astronomers Find Biggest Black Holes Yet". The New York Times.
- D. Merritt and M. Milosavljevic (2005). "Massive Black Hole Binary Evolution." http://relativity.livingreviews.org/Articles/lrr-2005-8/
- Two most massive black holes as of December 2011
- Shiga, David (10 January 2008). "Biggest black hole in the cosmos discovered". NewScientist.com news service.
- Kaufman, Rachel (10 January 2011). "Huge Black Hole Found in Dwarf Galaxy". National Geographic. Retrieved 1 June 2011.
- "Astronomers catch first glimpse of star being consumed by black hole". The Sydney Morning Herald. 2011-08-26.
- Burrows, D. N.; Kennea, J. A.; Ghisellini, G.; Mangano, V.; et al (Aug 2011). "Relativistic jet activity from the tidal disruption of a star by a massive black hole". Nature 476 (7361): 421–424. arXiv:1104.4787. Bibcode:2011Natur.476..421B. doi:10.1038/nature10374.
- Zauderer, B. A.; Berger, E.; Soderberg, A. M.; Loeb, A.; et al (Aug 2011). "Birth of a relativistic outflow in the unusual γ-ray transient Swift J164449.3+573451". Nature 476 (7361): 425–428. arXiv:1106.3568. Bibcode:2011Natur.476..425Z. doi:10.1038/nature10366.
- Ron Cowen, Small galaxy harbours super-hefty black hole, Nature News, 28 November 2012
- Remco C. E. van den Bosch, Karl Gebhardt, Kayhan Gültekin, Glenn van de Ven, Arjen van der Wel, Jonelle L. Walsh, An over-massive black hole in the compact lenticular galaxy NGC 1277, Nature 491, pp. 729–731 (29 November 2012) doi:10.1038/nature11592, published online 28 November 2012
Further reading
- Fulvio Melia (2003). The Edge of Infinity. Supermassive Black Holes in the Universe. Cambridge University Press. ISBN 978-0-521-81405-8.
- Laura Ferrarese and David Merritt (2002). "Supermassive Black Holes". Physics World 15 (1): 41–46. arXiv:astro-ph/0206222. Bibcode:2002astro.ph..6222F.
- Fulvio Melia (2007). The Galactic Supermassive Black Hole. Princeton University Press. ISBN 978-0-691-13129-0.
- Julian Krolik (1999). Active Galactic Nuclei. Princeton University Press. ISBN 0-691-01151-6.
- David Merritt (2013). Dynamics and Evolution of Galactic Nuclei. Princeton University Press. ISBN 9781400846122.
- Black Holes: Gravity's Relentless Pull Award-winning interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute
- Images of supermassive black holes
- NASA images of supermassive black holes
- The black hole at the heart of the Milky Way
- ESO video clip of stars orbiting a galactic black hole
- Star Orbiting Massive Milky Way Centre Approaches to within 17 Light-Hours ESO, October 21, 2002
- Images, Animations, and New Results from the UCLA Galactic Center Group
- Washington Post article on Supermassive black holes
- A simulation of the stars orbiting the Milky Way's central massive black hole | http://en.wikipedia.org/wiki/Supermassive_black_holes | 13 |
54 | Modern programming languages and multi-core CPUs offer very efficient multi-threading. Using multiple threads can improve the performance and responsiveness of an application, but working with threads is difficult and can make a program much more complicated. One way to organise threads so that they co-operate without tripping over each other is to use a messaging mechanism to communicate between them.
Why use messages?
Messages provide a good model for communication between independent processes because we humans use them all the time. We naturally coordinate and co-operate by sending messages to each other. The messages can be synchronous (like a conversation) or asynchronous (like an email or a letter) or broadcast (like a radio programme). The messaging paradigm is easy for us to imagine and understand. In particular, it provides a natural way for us to think about how things interact. We can easily imagine a process that is responsible for a particular action, which is started when it receives a message. The message may contain information needed for the action. When the action is complete, the process can report the result by sending another message. We can imagine a simple, independent process, all it has to do is wait for the arrival of a message, carry out a task, and send a message saying it has finished. What could be simpler?
What is a message bus?
A message bus provides a means of message transmission in which the sender and receiver are not directly connected. For example, Ethernet is a message bus – all senders put all their messages on the bus and, locally at least, all receivers receive every message.
When messages are sent on a bus, there needs to be a way for the receiver(s) to select the messages they need to process. There are various ways to do this, but for the message bus implemented in this article we allow a sender to label a message with a sender role, a subject and a message type. These may appear to be quite arbitrary properties, but they fit in with the way in which the bus is used and provide a straightforward way for receivers to filter messages so that they process only those messages that are relevant.
This form of messaging is usually referred to as a Publish and Subscribe model.
How our bus will work
These are the essential characteristics of our message bus:
- The message bus operates within a single application, to send messages between independent worker threads.
- Any worker thread in the application can access the message bus.
- Any worker thread may send and receive messages using the bus.
- Messages are broadcast, so every receiver that is listening will get every message.
- The bus does not store messages so a receiver will not get any messages that were sent before it connects to the bus.
- The thread that sends a message is separated from the thread(s) that receive it, so sending and receiving are always asynchronous.
- A receiver can set a filter to select only relevant messages for delivery – subscribing to a subset of the messages sent on the bus.
- Worker processes that send and receive messages are not held up by other worker threads when they do so. We want our senders and receivers to be working at their tasks without having to wait for messages to be delivered and processed by other threads.
Classes of the message bus
These are the classes which make up the bus:
- cBus – The base class of the bus and all the other classes. This class is never instantiated directly, but holds class (Shared) variables and methods that provide some core functions of the bus.
- cBusLink – A component that provides the mechanism for delivering messages to receivers.
- cThread – A component that provides and controls a thread for use within the sender and receiver classes.
- cSender – The class that is used by senders to put messages into the message bus. Each worker process that sends messages uses a cSender object.
- cReceiver – The class that is used by worker processes to subscribe and take delivery of messages from the bus.
- cFilter – The class used to apply subscription filters to incoming messages within a cReceiver.
- cMessage – Objects of this class are sent and received. In our system the message content is a string, but the class could be extended through inheritance to provide richer content.
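Before looking inside each class, here is a minimal sketch of how a worker thread might use the bus end to end. It is condensed into one code fragment for brevity and is based on the usage examples later in this article; the role, subject, type and content strings are invented purely for illustration.

    '// Receiving side: create a receiver, filter on the sender role, connect to the bus
    Dim oReceiver As New cReceiver
    oReceiver.Filter = New cRoleEquals("clock")
    oReceiver.Connect()

    '// Sending side: create a sender with a role and publish a message
    Dim oSender As New cSender("clock")
    oSender.SendMessage(New cMessage("clock", "tick", "time", "10:00:00"))

    '// Later, poll for a delivered message from the receiving worker thread
    Dim oMsg As cMessage = oReceiver.GetNextMessage()
    If oMsg IsNot Nothing Then
        Console.WriteLine(oMsg.Subject & ": " & oMsg.Content)
    End If

Because both injection and delivery happen on background threads, the message may not be available on the very first poll; a real worker would poll in a loop or use the asynchronous delivery described later. Note also that the receiver must connect before the message is sent, since the bus does not store messages.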
cBus and cBusLink – the core of the message bus
cBus is the base class for all the other classes in the implementation.
cBus is a virtual class – it is never itself instantiated. It contains only one class member, oBusLink, a shared instance of cBusLink. oBusLink is protected, which means it is accessible only to derived (child) classes of cBus. The two classes cBus and cBusLink, which are central to the whole message bus, are very simple (see Listing 1).
Listing 1 – cBus and cBusLink classes
Public Class cBus
    '// ///////////////////////////////////////
    '// The BusLink class is used only as a means of
    '// propagating publication of a message from
    '// senders to receivers.
    Protected Class cBusLink
        '// Event published with new message
        Public Event NewMessage(ByVal oMessage As cMessage)
        '// Event published when bus is stopped
        Public Event StopBus()
        '// Flag to indicate that the bus has been
        '// stopped. Provides orderly shutdown
        Private bStopped As Boolean = False
        '// Method to publish a message
        Public Sub PublishMessage(ByVal oMessage As cMessage)
            If bStopped Then Exit Sub
            RaiseEvent NewMessage(oMessage)
        End Sub
        '// Method to stop the bus, for orderly shutdown
        Public Sub StopBusNow()
            bStopped = True
            RaiseEvent StopBus()
        End Sub
    End Class

    '// Global shared single instance of cBusLink
    '// used to send messages to all receivers
    Protected Shared oBusLink As New cBusLink

    '// Global shared flag indicating the bus has
    '// been stopped
    Protected Shared bStopped As Boolean = False

    '// ///////////////////////////////////////
    '// ID generator is used by other classes to
    '// generate unique sequence numbers
    Protected Class cIDGenerator
        Private _ID As Long = 0
        Public Function NextID() As Long
            _ID += 1
            Return _ID
        End Function
    End Class

    '// ////////////////////////////////
    '// Public method to stop the bus before
    '// closedown. Ensures orderly closedown.
    Public Shared Sub StopBusNow()
        bStopped = True
        oBusLink.StopBusNow()
    End Sub
End Class
cBusLink is at the core of the message bus and is responsible for delivering messages to every recipient through the NewMessage event. As we shall see later, every cReceiver object holds a reference to a single shared cBusLink object, and they all subscribe to its NewMessage event. When this event is fired, every cReceiver object is given a reference to the new message.
Objects of the cMessage class carry the message data from sender to recipient. In our implementation, the class has only a single string payload (see Listing 2), but you can implement sub-types of cMessage with additional properties and methods for more sophisticated communication between senders and receivers.
Listing 2 – cMessage class
Public Class cMessage Inherits cBus '// ///////////////////////////////// '// This class is a container for allocating '// unique message ids to each mec Private Shared _oMsgID As New cIDGenerator '// Properties of the message, accessible to derived '// classes Protected _SenderRole As String = "" Protected _SenderRef As String = "" Protected _Subject As String = "" Protected _Type As String = "" Protected _Content As String = "" '// Message ID is private, it cannot be changed, '// even by derived classes Private _MsgID As Long '// ///////////////////////////// '// Default constructor used only for '// derived classes Protected Sub New() _MsgID = _oMsgID.NextID End Sub '// ///////////////////////////// '// Public constructor requires key message '// properties to be supplied. The message '// cannot be modified thereafter. Public Sub New(ByVal Sender As String, _ ByVal Subject As String, _ ByVal Type As String, _ Optional ByVal Content As String = "") _SenderRole = Sender _Subject = Subject _Type = Type _Content = Content _MsgID = _oMsgID.NextID End Sub '// ///////////////////////////////////////////////// '// Property accessors - all read-only so values '// cannot be changed by any recipient. Public ReadOnly Property SenderRole() As String Get Return _SenderRole End Get End Property Public ReadOnly Property Subject() As String Get Return _Subject End Get End Property Public ReadOnly Property Type() As String Get Return _Type End Get End Property Public ReadOnly Property MsgID() As Long Get Return _MsgID End Get End Property Public ReadOnly Property Content() As String Get Return _Content End Get End Property '// '//////////////////////////// End Class
This class implementation is mostly straightforward, but some aspects are worth looking at more closely:
- The class inherits cBus to gain access to the protected class cIDGenerator, which is declared in the base class.
- All the variables that store property values, except for MsgID, are declared Protected so that they can be accessed within a child class. MsgID is declared Private so its value cannot be changed by a child class.
cSender and its counterpart cReceiver do all the hard work.
cSender is the class used by a worker thread to add messages to the bus. Before we look under the hood, let’s examine the public members of the class that a sending process will use.
First, a worker process that wants to send messages must instantiate an instance of cSender, providing the sender's role as a parameter. The role allows for the possibility that there might be multiple worker threads performing the same role within the application. A recipient can filter messages based on the role of the sender, but does not need to know that there is more than one sender acting in that role.
Dim oSender As New cSender("clock")
Once instantiated, the cSender object can be used to send messages on the bus:
Dim oMsg As New cMessage("clock", "hourchange", "time", "10>11")
oSender.SendMessage(oMsg)
In this case, the message has the sender role "clock", the subject "hourchange", the type "time" and the content "10>11".
Under the hood, the cSender implementation uses a queue to separate the sender process from the bus. When the worker thread sends a message it is written to the injector queue, from where it is picked up by a separate injector thread and published through the bus link.
The injector runs on a separate thread, so that placing a message on the bus does not hold up the worker process. The injector thread is provided by a cThread object which runs only when messages are waiting in the injector queue. cThread is described in more detail below.
The implementation of the cSender class is shown in Listing 3.
Listing 3 – cSender Class
Public Class cSender Inherits cBus '// ////////////////////////////////////////// '// Queue of messages waiting to be injected '// into the message bus. Each sender has its '// own private injector queue Private _oMsgQ As New System.Collections.Generic.Queue(Of cMessage) '// ///////////////////////////////////////// '// Reference to the global BusLink instance, used '// only to pick up the BusStopped event published '// by the bus when stopped. Private WithEvents oMyBusLink As cBusLink '// ///////////////////////////////////////// '// Event to inform owner the bus has stopped Public Event Stopped() '// Sender role, used to identify the sender and '// provide the key for filtering messages '// at the receiver. Private _Role As String Public ReadOnly Property Role() As String Get Return _Role End Get End Property #Region "Construct and destruct" '// ////////////////////////////////////////// '// Constructor with role (mandatory) Public Sub New(ByVal sRole As String) _Role = sRole '// Set the reference to the buslink to the '// shared instance of the single buslink. We '// need this reference to pick up the stop event oMyBusLink = oBusLink End Sub '// ////////////////////////////////////////////// '// This method is called when the bus is closed down Private Sub oBusLink_StopBus() Handles oMyBusLink.StopBus SyncLock _oMsgQ RaiseEvent Stopped() End SyncLock End Sub #End Region #Region "Sending messages" '// ///////////////////////////////////////// '// Method used by worker thread to place a '// new default cMessage object on the injector '// queue. Public Function SendNewMessage(ByVal Type As String, _ ByVal Subj As String, _ Optional ByVal Ref As String = "", _ Optional ByVal Content As String = "") As cMessage If BusStopped Then Return Nothing Dim oM As New cMessage(_Role, Type, Subj, Ref, Content) SendMessage(oM) Return oM End Function '// ////////////////////////////////////////// '// Method used by worker thread to place message '// object on the injector queue. Public Sub SendMessage(ByVal pMessage As cMessage) If BusStopped Then Exit Sub '// We do not allow Nothing to be sent If pMessage Is Nothing Then '// Do nothing '// We could throw an error here Else SyncLock _oMsgQ _oMsgQ.Enqueue(pMessage) '// Start the thread only if '// one message on the queue. If _oMsgQ.Count = 1 Then _oInjectorThread.Start() End If End SyncLock End If End Sub '// //////////////////////////////////////// '// Holds up the caller thread until all the messages '// have been injected into the bus Public Sub Flush() Do Until _oMsgQ.Count = 0 Threading.Thread.Sleep(2) Loop End Sub #End Region #Region "Message Injector" '// ////////////////////////////////////////// '// Functions run by the thread for injecting messages '// into the bus. The thread runs only when at '// least one message is waiting in the injector queue. Private WithEvents _oInjectorThread As New cThread '// ////////////////////////////////////////// '// Injector Thread fires Run event to place '// messages on the queue Private Sub _oInjectorThread_Run() Handles _oInjectorThread.Run InjectMessagesNow() End Sub '// /////////////////////////////////////////// '// When the injector thread runs, this function '// is called to push all the queued messages into '// the bus. Private Sub InjectMessagesNow() Dim oM As cMessage '// Loop until all messages in the '// queue have been injected into the '// bus. Do '// Check if stopped flag was set while '// going round loop. 
If BusStopped Then Exit Sub '// Get the next message off the '// injector queue SyncLock _oMsgQ If _oMsgQ.Count > 0 Then oM = _oMsgQ.Dequeue() Else oM = Nothing End If '// Release the lock so that the worker '// process can add new messages to '// the queue while we are publishing '// this message on the bus End SyncLock If oM Is Nothing Then '// Queue is empty, so finish the '// loop Exit Do End If '// Now we have got the message, we can '// send it using the single global '// cBusLink which is instantiated in the '// base class cBus. SyncLock oBusLink oBusLink.PublishMessage(oM) End SyncLock Loop End Sub #End Region Protected Overrides Sub Finalize() '// Close down the injector thread _oInjectorThread.StopThread() MyBase.Finalize() End Sub End Class
SendMessage is used by a worker process to place messages on the injector queue. The queue class is not threadsafe, so SyncLock is used to protect the queue from simultaneous use by another thread. The injector thread is started only when a message is added to an empty queue, and this fires the thread's Run event.
The private method _oInjectorThread_Run handles the injector thread's Run event. The method takes all the waiting messages from the injector queue, placing them in turn on the bus by using the BusLink's PublishMessage method. When the method exits, the thread is blocked within cThread until another message is placed on the empty queue. If a message is added to the injector queue while an earlier message is being sent on the bus, it will be included in the sending loop without needing the Run event to fire again.
Objects of the cReceiver class are used by worker processes to receive messages from the bus.
The process that creates the cReceiver object can choose to set filters so that only relevant messages are delivered. More detail on filtering is given below.
When the receiver object connects to the bus, it sets its own private member variable _BusLinkRef to refer to the shared member oBusLink. _BusLinkRef is declared WithEvents so that the NewMessage event of the cBusLink can be handled.
The thread that owns the receiver can set a cFilter object on the receiver. Then every message received through the NewMessage event is checked against the filter and, if it passes, it is added to the receiver's incoming message queue, waiting to be delivered. The filter can be changed during the run.
Messages are delivered and processed in one of three ways:
- The worker thread calls GetNextMessage to return the next message from the queue. If there are no messages waiting, the method returns Nothing.
- The worker thread calls DeliverWaitingMessages to deliver all queued messages through the MessageReceived event. The events are raised on the worker thread.
- The creator/owner calls the StartAsync method to request that the receiver object provides a separate worker thread to raise the MessageReceived event when new messages arrive. The event is raised on a thread provided by a cThread object.
Using DeliverWaitingMessages means that the receiver worker thread must set up its own processing loop, for example by having its own timer to repeat the loop. This is appropriate when, for example, the thread needs to interact with the GUI – using a Timer component on a form could provide the thread. In contrast, using StartAsync means that the cReceiver object will create its own internal worker thread that raises the MessageReceived event whenever new messages arrive, as sketched below.
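As a sketch of the asynchronous style (the cClockMonitor class and its handler are invented for illustration; the receiver and filter classes are the ones described in this article), the owner declares the receiver WithEvents, connects it, and calls StartAsync; MessageReceived is then raised on the receiver's internal delivery thread rather than on the owner's thread:

    Public Class cClockMonitor
        '// WithEvents so that MessageReceived can be handled below
        Private WithEvents _oReceiver As New cReceiver

        Public Sub StartMonitoring()
            _oReceiver.Filter = New cRoleEquals("clock")
            _oReceiver.Connect()
            _oReceiver.StartAsync()
        End Sub

        '// Runs on the receiver's delivery thread, not the GUI thread
        Private Sub OnMessage(ByVal oMessage As cMessage) Handles _oReceiver.MessageReceived
            Console.WriteLine(oMessage.Subject & ": " & oMessage.Content)
        End Sub
    End Class

Because the event is raised on a background thread, a handler that needs to update the user interface must marshal the call back to the GUI thread (for example with Control.Invoke).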
Listing 4 – cReceiver class
Public Class cReceiver Inherits cBus '// ////////////////////////////////////// '// Id generator for all cReceiver objects Private Shared _oRecId As New cIDGenerator '// ////////////////////////////////////// '// Event used to deliver a message to the '// message handler function Public Event MessageReceived(ByVal oMessage As cMessage) '// ////////////////////////////////////// '// Event used to indicate the bus has stopped, '// used to ensure orderly shutdown of the bus Public Event Stopped() Public ReadOnly Property IsStopped() As Boolean Get Return BusStopped End Get End Property '// ////////////////////////////////////// '// Message queue holding the messages '// waiting to be delivered Private _MQueue As New System.Collections.Generic.Queue(Of cMessage) '// /////////////////////////////////////////// '// Filter set by the recipient to select '// messages. Fileter can be by specific role(s), '// subjects(s) or type(s) or using more specialised '// filters. Filters can be changed at any time. The '// default no filter allows all messages through. Public Filter As cFilter = Nothing '// ////////////////////////////////////////// '// Reference to the single global buslink '// so that the receiver can pick up published '// messages from the bus Private WithEvents _BusLinkRef As cBusLink '// Flag to indicate that this object has been '// finalised and is closing. Private _Closing As Boolean = False Private _RaiseStopEvent As Boolean = False '// ///////////////////////////////////////// '// Unique identifier of this receiver object Private _ID As Long '// ///////////////////////////////////////// '// Counts of number of messages received '// and delivered Private _BCount As Long = 0 ' Messages from the Bus Private _RCount As Long = 0 ' Messages received onto the queue Private _DCount As Long = 0 ' Messages delivered to the worker '// ////////////////////////////////// '// Constructor Public Sub New() _ID = _oRecId.NextID End Sub '// /////////////////////////////////// '// Establishes connection to the bus so that '// message delivery can start Public Sub Connect() '// ///////////////////////////////////////// '// Set the buslink variable to refer to the '// shared buslink so that it delivers '// messages through the event handler _BusLinkRef = oBusLink '// NOTE: oBus is a direct reference to '// the protected shared class member. End Sub '// //////////////////////////////////////// '// Breaks the connection with the bus '// so that messages are no longer '// received. Public Sub Disconnect() _BusLinkRef = Nothing End Sub '// ///////////////////////////////// '// Accessor methods for the readonly '// properties Public ReadOnly Property BCount() As Long '// Bus message count Get Return _BCount End Get End Property Public ReadOnly Property RCount() As Long '// Received message count Get Return _RCount End Get End Property Public ReadOnly Property DCount() As Long '// Delivered message count Get Return _DCount End Get End Property Public ReadOnly Property QCount() As Long '// Queued (waiting) message count Get If _MQueue IsNot Nothing Then Return _MQueue.Count Else Return 0 End If End Get End Property Public ReadOnly Property ID() As Long '// Unique ID number of this receiver Get Return _ID End Get End Property Public Function MessagesWaiting() As Boolean '// Helper property returns true if there '// are messages waiting Return QCount > 0 End Function #Region "Message arrival" '// ////////////////////////////////// '// This method handles the new message '// event from the bus. 
The message is '// queued for delivery. Private Sub oBusLink_NewMessage( _ ByVal oMessage As cMessage _ ) Handles _BusLinkRef.NewMessage '// Discard message if closing, or the bus has stopped If _Closing Then Exit Sub If BusStopped Then Exit Sub _BCount += 1 '// //////////////////////////// '// Check against the filter. '// The message must be included by the filter '// otherwise it will not be delivered. Select Case True Case Filter Is Nothing, Filter.bInclude(oMessage) '// /////////////////////////////// '// New message has passed the filter, so '// add it to the message queue waiting '// for delivery to the worker process. AddToQueue(oMessage) End Select End Sub '// //////////////////////////////// '// Method used to add messages '// to the message queue when they arrive '// from the message bus. Private Sub AddToQueue(ByVal oMessage As cMessage) '// //////////////////////////////////////////// '// Check if the queue exists - if not, then '// exit without adding a message. If _MQueue Is Nothing Then Exit Sub '// //////////////////////////////////////////// '// Check if closing or stopped, if so exit If BusStopped Then Exit Sub If _Closing Then Exit Sub Dim bStartDelivery As Boolean '// //////////////////////////////////////////// '// SyncLock the queue to guarantee exclusive '// access, then add the message SyncLock _MQueue _RCount += 1 _MQueue.Enqueue(oMessage) '// //////////////////////////////////////////////// '// We start the delivery thread if async AND '// this is the first message in the queue bStartDelivery = _AsyncMode And _MQueue.Count = 1 End SyncLock '// ////////////////////////////// '// Check if we need to start the delivery thread '// which we do only in async mode and if this is '// the first message in the queue If bStartDelivery Then _DeliveryThread.Start() End If End Sub #End Region #Region "Message delivery" '// //////////////////////////////// '// '// Message delivery can be made in these '// ways: '// * Asynchronously on a provided thread '// - call StartAsync to enable this '// - messages are delivered through MessageReceived event '// '// * By a call from the worker thread '// - use GetNextMessage to retrieve the message '// '// GetNextMessage returns the next '// message as the function result. '// It returns Nothing if '// there is no message in the queue '// '// //////////////////////////////// '// Delivery thread is used with asynch delivery only Private WithEvents _DeliveryThread As cThread = Nothing Private _AsyncMode As Boolean = False '//////////////////////////////////// '// Starts Asynchronous delivery through the NewMessage event. '// Called by the creator/owner to initiate a new thread delivering '// messages from this receiver. Public Sub StartAsync() '// Do nothing if closing, stopped or already in asyinc mode. If _Closing Then Exit Sub If BusStopped Then Exit Sub If _AsyncMode Then Exit Sub _AsyncMode = True '// Create and start the delivery thread. If _DeliveryThread Is Nothing Then _DeliveryThread = New cThread _DeliveryThread.Start() End Sub '// /////////////////////////////////////////////// '// Picks up the next message from the queue '// if any and returns it. Returns Nothing '// if there is no message. 
Public Function GetNextMessage() As cMessage '// Do not return anything if closing or stopped If _Closing Then Return Nothing If BusStopped Then Return Nothing Dim oM As cMessage '// Lock the queue and get the next message SyncLock _MQueue If _MQueue.Count > 0 Then oM = _MQueue.Dequeue _DCount += 1 Else oM = Nothing End If End SyncLock '// Return the message (if any) Return oM End Function '// /////////////////////////////////////////////// '// This event handler is called when the thread runs '// - only when messages are waiting to be delivered in '// async mode Private Sub _DeliveryThread_Run() Handles _DeliveryThread.Run DeliverWaitingMessages() End Sub '// /////////////////////////////////////////////// '// Delivers all the messages in the incoming '// message queue using the MessageReceived event Public Sub DeliverWaitingMessages() '// Raise the stop event if the bus has been stopped If BusStopped Then '// Inform the delivery thread If _RaiseStopEvent Then RaiseEvent Stopped() _RaiseStopEvent = False End If Exit Sub End If '// Do nothing if closing If _Closing Then Exit Sub '// The queue may be nothing , so simply '// exit and try again on the cycle If _MQueue Is Nothing Then Exit Sub Dim oM As cMessage '// Retrieve all the messages and deliver them '// using the message received event. Do '// Lock the queue before dequeuing the message SyncLock _MQueue If _MQueue.Count > 0 Then oM = _MQueue.Dequeue Else oM = Nothing End If End SyncLock '// /////// '// After releasing the lock we '// can deliver the message. If oM IsNot Nothing Then _DCount += 1 RaiseEvent MessageReceived(oM) End If '// If the queue was not empty then loop back for the '// next message Loop Until oM Is Nothing End Sub #End Region #Region "Stats Report" '//////////////////////////////////////////////// '// This sub simply publishes a message of '// stats about this receiver. Public Sub StatsReport() If BusStopped Then Exit Sub Dim sRpt As String sRpt = "Report from Receiver #" & Me.ID sRpt &= "|BUS=" & _BCount sRpt &= "|REC=" & _RCount sRpt &= "|DEL=" & _DCount sRpt &= "|Q=" & _MQueue.Count sRpt &= "|Closing=" & _Closing Dim s As New cSender("Receiver#" & ID) s.SendNewMessage("STATS", "STATS", sRpt) s.Flush() s = Nothing End Sub #End Region '// /////////////////////////////////// '// Handler for the stopbus event. Do '// not deliver any more messages once the '// bus has been stopped. Private Sub oBusLinkRef_StopBus() Handles _BusLinkRef.StopBus _Closing = True '_DeliveryTimer = Nothing _AsyncMode = False _RaiseStopEvent = True End Sub '// //////////////////////////////////// '// Finalise to tidy up resources when being disposed Protected Overrides Sub Finalize() _DeliveryThread.StopThread() _Closing = True _AsyncMode = False _MQueue = Nothing MyBase.Finalize() End Sub End Class
The cThread class provides a thread and the control methods needed to block and release the thread as required.
By default, the thread is blocked. The class provides a method, Start, which unblocks the thread. The thread immediately raises the Run event to carry out the processing required, and then blocks again until the Start method is called again, when it repeats the Run event.
In our message bus, cThread is used in cSender to inject messages onto the bus, and in cReceiver to deliver messages when operating in Async mode. In both of these classes the Run event handler picks messages off a queue until it is empty, then exits. It is quite likely that new messages are added to the queue while the handler is running, and these are picked up in the handler loop. Eventually, the queue is empty and, if Start has not been called again, the thread blocks until it is.
The implementation of the class is shown in Listing 5.
Listing 5 – cThread class
Public Class cThread
    Inherits cBus

    Private WithEvents _BusLinkRef As cBusLink = oBusLink
    Private Shared iThreadCount As Long = 0

    '// Event fired to execute the thread's
    '// assigned processes.
    Public Event Run()

    '// Thread object provides the thread
    Private _Thread As New Thread(AddressOf RunThread)

    '// Signal object to block the thread
    '// when there are no messages to be delivered
    Private _Signal As New EventWaitHandle(False, EventResetMode.AutoReset)

    '// Flag to indicate thread has been stopped
    Private bThreadStopped As Boolean = False

    '// Start the thread on creation of the object
    Public Sub New()
        _Thread.Start()
    End Sub

    '// Start called by owner to
    '// unblock this thread.
    Public Sub Start()
        If _Thread.ThreadState = ThreadState.Unstarted Then _Thread.Start()
        SyncLock Me
            _Signal.Set()
        End SyncLock
    End Sub

    '// Stop called by owner to close
    '// down thread
    Public Sub StopThread()
        bThreadStopped = True
        _Signal.Set()
    End Sub

    '// Method executed by the thread. This is
    '// a repeated loop until the bus is stopped
    Private Sub RunThread()
        Do
            '// The signal blocks the thread until
            '// it is released by the Start method
            _Signal.WaitOne()
            If bThreadStopped Then
                Exit Sub
            End If
            '// Raise the thread event that will
            '// do the work.
            RaiseEvent Run()
        Loop
    End Sub

    Private Sub _BusLinkRef_StopBus() Handles _BusLinkRef.StopBus
        StopThread()
    End Sub
End Class
cFilter objects are used by cReceiver to apply filtering to incoming messages. The base cFilter class is declared MustInherit, so it cannot be instantiated. It is only by defining a child class to apply some filtering logic that messages get filtered. This is how it works:
- The base class, cFilter, defines a Protected MustOverride method bMatches, which takes a cMessage object as a parameter. In a child class this method is overridden to implement specific filtering logic.
- cFilter defines a Public method, bInclude, which takes a message object as a parameter and returns true if the message is to be included, and false if not. This is the method used by cReceiver to check if a message passes the filter. Apart from testing its own bMatches value, this method also contains the logic to check other cFilter objects that have been attached in And / Or collections.
- Four further methods, And_, Or_, And_Not and Or_Not, provide the means to add other filter objects to the And/Or collections of this filter.
Using the And_, Or_ and related methods makes it easy to build compound logical conditional tests using basic filter components. For example, if I have two filter objects FilterA and FilterB, they can be combined as FilterA.Or_(FilterB), or FilterA.And_(FilterB). It is also possible to combine several chains of filters. For example, FilterA.And_(FilterB.Or_Not(FilterC)) implements the filter condition A and (B or not C).
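As a small concrete illustration using the filter classes from Listing 6 (the role, type and subject strings are invented, and oReceiver is assumed to be a cReceiver object), a receiver could be set up to accept time messages from the clock, or any message whose subject contains "STATS":

    Dim oFilter As New cRoleEquals("clock")
    oFilter.And_(New cTypeEquals("time"))
    oFilter.Or_(New cSubjectContains("STATS"))
    oReceiver.Filter = oFilter

With the bInclude logic shown in Listing 6, this filter passes a message when its sender role is "clock" and its type is "time", or when its subject contains "STATS".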
Actual filtering classes implemented
Various specialised classes of cFilter are implemented to provide filtering on sender role, message type and subject. These include, for example, cRoleEquals, cTypeContains and cSubjectEquals. As their names suggest, these filters check that the key fields of the message match a given string.
A worker process that uses cReceiver can apply filters to the incoming messages simply by setting the Filter property of the receiver:
Dim oReceiver As New cReceiver
oReceiver.Filter = New cRoleEquals("monitor")
Inside cFilter and its derived classes
The cFilter class defines the protected MustOverride method bMatches. The derived classes override bMatches, providing the appropriate code to determine the match. For example, in the case of the cSubjectContains class, the overriding bMatches method is:
Protected Overrides Function bMatches(ByVal oMessage As cMessage) As Boolean
    Return oMessage.Subject.Contains(FilterString)
End Function
If you need a more specialised filtering mechanism in your application, it is easy to define a derived class of cFilter that implements whatever logic you need in bMatches.
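For example, here is a sketch of a filter that selects on the message content rather than on its header fields. The class name cContentContains is invented for illustration; it simply follows the same pattern as the classes in Listing 6:

    Public Class cContentContains
        Inherits cFilter
        Public FilterString As String
        Public Sub New(ByVal sFilter As String)
            FilterString = sFilter
        End Sub
        '// Match when the message content contains the given text
        Protected Overrides Function bMatches(ByVal oMessage As cMessage) As Boolean
            Return oMessage.Content.Contains(FilterString)
        End Function
    End Class

Like the built-in filters, it can be combined with others through And_ and Or_, because all of that logic lives in the cFilter base class.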
Listing 6 – cFilter class and derived classes
'// The filter base class is used to implement '// message filtering on incoming messages '// at each receiver. Filters can be grouped in '// AND and OR groups - the message is '// included if it matches all filters in the '// AND group or any filter in the OR group. Public MustInherit Class cFilter Inherits cBus '// A collection of filters which this filter must AND '// with to allow the message through Private oAnds As New System.Collections.Generic.List(Of cFilter) '// A collection of filters which this filter must OR '// with to allow the message through Private oOrs As New System.Collections.Generic.List(Of cFilter) '// Check if the message is included by this filter Public Function bInclude(ByVal oMessage As cMessage) As Boolean Dim bResult As Boolean '// First, test this filter alone bResult = bMatches(oMessage) Dim oFF As cFilter '// If this filter matches, then check all the '// ANDs to see if they also match If bResult Then For Each oFF In oAnds bResult = oFF.bMatches(oMessage) '// As soon as we find the first failure to '// match we know the result is a non-match '// for this filter and all its ANDs If Not bResult Then Exit For Next End If '// If all the ANDS were true, then the whole result '// is true regardless of the OR result. If bResult Then Return True '// The ANDs did not match, so now '// we find if any one OR matches, and if so '// the result is true For Each oFF In oOrs bResult = oFF.bInclude(oMessage) If bResult Then Return True Next oFF '// No match on any of the ORS, so '// the message does not match this filter Return False End Function '// /////////////////////////////////// '// This method must be overridden in child '// classes to implement the matching test. Protected MustOverride Function bMatches( _ ByVal omessage As cMessage) As Boolean '// /////////////////////////////////// '// These methods add a given filter to the '// ANDs or ORs collections to build filtering '// logic. Public Function And_(ByVal oFilter As cFilter) As cFilter oAnds.Add(oFilter) Return Me End Function Public Function Or_(ByVal ofilter As cFilter) As cFilter oOrs.Add(ofilter) Return Me End Function Public Function Or_Not(ByVal ofilter As cFilter) As cFilter oOrs.Add(Not_(ofilter)) Return Me End Function Public Function And_Not(ByVal oFilter As cFilter) As cFilter oAnds.Add(Not_(oFilter)) Return Me End Function '// '// /////////////////////////////////////// '// /////////////////////////////////////// '// Class and function to provide negation '// of a filter condition Private Class cNot Inherits cFilter Private oNotFilter As cFilter Public Sub New(ByVal oFilter As cFilter) oNotFilter = oFilter End Sub Protected Overrides Function bMatches(ByVal omessage As cMessage) As Boolean Return Not oNotFilter.bMatches(omessage) End Function End Class Private Function Not_(ByVal oFilter As cFilter) As cFilter Return New cNot(oFilter) End Function '// '// ///////////////////////////////////////////// End Class #Region "Filter implementations" '// ///////////////////////////////////////// '// Derived specialised classes for implementing '// different specific filters. 
Public Class cTypeContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type.Contains(FilterString) End Function End Class Public Class cTypeEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type = FilterString End Function End Class Public Class cRoleContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.SenderRole.Contains(FilterString) End Function End Class Public Class cRoleEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.SenderRole = FilterString End Function End Class Public Class cSubjectContains Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Subject.Contains(FilterString) End Function End Class Public Class cSubjectEquals Inherits cFilter Public FilterString As String Public Sub New(ByVal sFilter As String) FilterString = sFilter End Sub Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Subject = FilterString End Function End Class Public Class cRoleTypeSubjectFilter Inherits cFilter Public sRole As String = "" Public sType As String = "" Public sSubject As String = "" Protected Overrides Function bMatches( _ ByVal oMessage As cMessage) As Boolean Return oMessage.Type = sType _ And oMessage.SenderRole = sRole _ And oMessage.Subject = sSubject End Function End Class '// '/////////////////////////////////////////////// #End Region
A demo application
The demo application included in the zip file is a simple windows forms application that includes a number of components that communicate with each other via the MessageBus:
- The main control form provides buttons for opening the other form types
- A mouse tracker form, that monitors mouse movements over the form and sends mouse movement messages on the bus
- A clock object that sends a time message whenever the time ticks past a tenth of a second, a second, a minute or an hour.
- A mouse follower form, that monitors mouse movement messages from the bus and positions a red box on the form at the position indicated by the message. This form also receives clock events from the bus and displays the time, as sent out by the clock object.
- A message sender form, which can generate bus messages of different types at a frequency set by the user
- A message receiver form, that lists messages received, optionally filtered on attributes set by the user
The user can open as many sender forms, receiver forms and mouse follower forms as they wish, and can set the message types to be sent and received. Each of the forms operates independently of the others. | http://www.developerfusion.com/article/145371/an-internal-application-message-bus-in-vbnet/ | 13 |
116 | The exponential function is one of the most important functions in mathematics. It is written as exp(x) or e^x, where e is the base of the natural logarithm.
As a function of the real variable x, the graph of e^x is always positive (above the x axis) and increasing (viewed left-to-right). It never touches the x axis, although it gets arbitrarily close to it (thus, the x axis is a horizontal asymptote to the graph). Its inverse function, the natural logarithm, ln(x), is defined for all positive x.
Sometimes, especially in the sciences, the term exponential function is reserved for functions of the form ka^x, where a, called the base, is any positive real number. This article will focus initially on the exponential function with base e.
In general, the variable x can be any real or complex number, or even an entirely different kind of mathematical object; see the formal definition below.
Using the natural logarithm, one can define more general exponential functions. The function a^x = e^(x·ln a), defined for all a > 0 and all real numbers x, is called the exponential function with base a.
Note that the equation above holds for a = e, since ln e = 1, so that e^(x·ln e) = e^x.
Exponential functions "translate between addition and multiplication" as is expressed in the following exponential laws:
These are valid for all positive real numbers a and b and all real numbers x and y. Expressions involving fractions and roots can often be simplified using exponential notation because:
1/a = a^(−1)
√a = a^(1/2)
and, for any a > 0, real number b, and integer n > 1:
(a^b)^(1/n) = a^(b/n), i.e. the n-th root of a^b is a^(b/n).
Derivatives and differential equations
The importance of exponential functions in mathematics and the sciences stems mainly from properties of their derivatives. In particular,
d/dx e^x = e^x.
That is, e^x is its own derivative, a property shared only by its constant multiples among real-valued functions of a real variable. Other ways of saying the same thing include:
- The slope of the graph at any point is the height of the function at that point.
- The rate of increase of the function at x is equal to the value of the function at x.
- The function solves the differential equation y′ = y.
In fact, many differential equations give rise to exponential functions, including the Schrödinger equation and Laplace's equation, as well as the equations for simple harmonic motion.
For exponential functions with other bases:
d/dx a^x = a^x · ln a.
Thus any exponential function is a constant multiple of its own derivative.
If a variable's growth or decay rate is proportional to its size — as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay — then the variable can be written as a constant times an exponential function of time.
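In symbols, if a quantity y(t) changes at a rate proportional to its current size, then

\[
\frac{dy}{dt} = k\,y \quad\Longrightarrow\quad y(t) = y(0)\,e^{kt},
\]

with k > 0 giving exponential growth and k < 0 exponential decay; for radioactive decay, for instance, k = −ln 2 / t_{1/2}, where t_{1/2} is the half-life.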
The exponential function e^x can be defined in two equivalent ways, as an infinite series:
e^x = Σ_(n=0 to ∞) x^n / n! = 1 + x + x^2/2! + x^3/3! + ⋯
or as the limit of a sequence:
e^x = lim_(n→∞) (1 + x/n)^n.
In these definitions, n! stands for the factorial of n, and x can be any real number, complex number, element of a Banach algebra (for example, a square matrix), or member of the field of p-adic numbers.
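As a quick numerical illustration of the two definitions at x = 1:

\[
\sum_{n=0}^{6} \frac{1}{n!} = 1 + 1 + \tfrac{1}{2} + \tfrac{1}{6} + \tfrac{1}{24} + \tfrac{1}{120} + \tfrac{1}{720} \approx 2.71806,
\qquad
\left(1 + \tfrac{1}{1000}\right)^{1000} \approx 2.71692,
\]

both of which are already close to e ≈ 2.71828; the series converges considerably faster than the limit of the sequence.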
For further explanation of these definitions and a proof of their equivalence, see the article Definitions of the exponential function.
On the complex plane
When considered as a function defined on the complex plane, the exponential function retains the important properties
e^(z + w) = e^z · e^w
e^0 = 1
e^z ≠ 0
d/dz e^z = e^z
for all z and w.
It is a holomorphic function which is periodic with imaginary period 2πi and can be written as
e^(a + bi) = e^a (cos b + i sin b)
where a and b are real values. This formula connects the exponential function with the trigonometric functions and with the hyperbolic functions. Thus we see that all elementary functions except for the polynomials spring from the exponential function in one way or another.
See also Euler's formula in complex analysis.
Extending the natural logarithm to complex arguments yields a multi-valued function, ln(z). We can then define a more general exponentiation:
z^w = e^(w·ln z)
for all complex numbers z and w. This is also a multi-valued function. The above stated exponential laws remain true if interpreted properly as statements about multi-valued functions.
The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. This can be seen by noting that the case of a line parallel with the real or imaginary axis maps to a line or circle.
Matrices and Banach algebras
The definition of the exponential function given above can be used verbatim for every Banach algebra, and in particular for square matrices. In this case we have
- e^(x + y) = e^x e^y if xy = yx
- e^0 = 1
- e^x is invertible with inverse e^(−x)
In addition, the derivative of exp at the point x is that linear map which sends u to u · e^x.
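A small concrete example: for the nilpotent 2 × 2 matrix N with a single 1 in its upper-right corner, the series terminates after two terms because N² = 0:

\[
N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
e^{tN} = I + tN = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix},
\]

and multiplying two such matrices confirms e^{sN} e^{tN} = e^{(s+t)N}, as required by the first property above (N commutes with itself).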
In the context of non-commutative Banach algebras, such as algebras of matrices or operators on Banach or Hilbert spaces, the exponential function is often considered as a function of a real argument:
- f(t) = e^(tA)
where A is a fixed element of the algebra and t is any real number. This function has the important properties
- f(s + t) = f(s)f(t)
- f(0) = 1
- f'(t) = Af(t)
On Lie algebras
The "exponential map" sending a Lie algebra to the Lie group that gave rise to it shares the above properties, which explains the terminology. In fact, since R is the Lie algebra of the Lie group of all positive real numbers with multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie algebra M(n, R) of all square real matrices belongs to the Lie group of all invertible square matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map.
Double exponential function
The term double exponential function can have two meanings:
- a function with two exponential terms, with different exponents
- a function f(x) = a^(a^x); this grows even faster than an exponential function; for example, if a = 10: f(−1) ≈ 1.26, f(0) = 10, f(1) = 10^10, f(2) = 10^100 = googol, f(3) = 10^1000, ..., f(100) = googolplex.
Compare the super-exponential function, which grows even faster. | http://www.biologydaily.com/biology/Exponential_function | 13 |
52 | Evaluating Results: Statistics, Probability, and Proof (page 2)
When you do an experiment or a survey comparing two sets of people or things, the job of statistics is to show whether you have a significant difference between the two sets. A small difference between the averages or means may or may not be significant. How do you decide?
The experts use a system that has two basic stages:
- They examine the data to find out how much variation there already is among the specimens.
- They use that variation as a basis for deciding whether the experimental difference, or survey difference, is large enough to be a significant difference.
Another useful statistic is the median. In the test of reaction times by the ruler-drop method, we asked each partner to measure five catches by the other partner, then took the average, or mean. We might instead have chosen the number in the middle, which is called the median. It can often be as useful as the mean and takes less time to calculate.
Statistically Meaningful Results
The example of taking five measurements across a room in the measurement chapter may be thought of as a small sample from among all the possible measurements that could be made. We could also set up a program of making many such measurements, or of many people each making many measurements, so that one might eventually have thousands or millions of measurements.
The large, unknown number of measurements of which any one measurement is considered a sample can be known only from the sample. This is like eating cookies from a cookie jar. No matter how enjoyable the first, second, or third, we will never know how good the remaining cookies are from the samples alone. We can only predict or infer that the uneaten cookies, the population from which the sample came, are like the sample.
How do scientists judge whether their sample of measurements (or findings expressed other ways) fairly represents all possible measurements? Let's say that Alice is doing an experiment as her science project in which she has planted popcorn seeds in two planters to test the value of a fertilizer. She is going to compare the two plantings, one with the fertilizer and the other without but otherwise grown under uniform conditions. To keep the numbers small for quick, easy measuring, let's say that each planter has five healthy, growing plants. Suppose that Alice measures the heights of the five plants in one of the planters and finds the following:
- Plant A 57.2 cm
- Plant B 57.2 cm
- Plant C 57.2 cm
- Plant D 57.2 cm
- Plant E 57.2 cm
What? All the same? Most of us who have had experience with growing things would immediately say that this is highly improbable, that it is just a coincidence that all the plants would be precisely the same height. Correct! It is a matter of chance or probability. Probability, you will find, is the main theme in the evaluation of scientific findings. Suppose, now, that Alice's measuring had brought the following results:
- Plant A 57.9 cm
- Plant B 55.7 cm
- Plant C 58.4 cm
- Plant D 59.2 cm
- Plant E 57.3 cm
"That's more like it," we would say. We expect differences in things, especially in living, growing things. That is, it is highly probable that the heights would not be all the same.
Now, whether we like the sample or not, it is all we know about the larger population of plants that Alice's supply of seed might grow. Suppose, again, that Ken planted 100 seeds from the same supply as Alice's and under very much the same conditions. Then suppose he went to work measuring them at the same stage as Alice's plants. We would like to see how the sizes vary in this much larger sample, so we make a frequency distribution (see figure 13.1) showing the sizes. That is, an "X" mark is made for each corn plant over its height measurement, which is listed along the bottom of the chart.
We see that there are not many of the shortest and tallest plants but more of each size in the middle of the range. If we drew a line over the tops of the columns of sizes (and if we had many more specimens measured and recorded) the lines, or line graphs, would look something like the one in figure 13.2.
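The figures themselves are not reproduced here, but the tallying can be sketched in a few lines of code (the heights below are simulated, not Ken's actual measurements):

import random
from collections import Counter

random.seed(1)                                                   # so the sketch is repeatable
heights = [round(random.gauss(57.5, 1.2)) for _ in range(100)]   # 100 simulated corn heights, cm
tally = Counter(heights)
for h in sorted(tally):
    print("%s %s" % (h, "X" * tally[h]))                         # a text version of figure 13.1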
Such a distribution of a large number of things (and it must be large, preferably in the thousands) is called a normal distribution. Many things show normal distribution when they are measured and graphed like this, for example, the heights of large numbers of people picked at random and the amounts of food eaten per person per year. This widespread tendency of measured things to show a normal distribution has been used by scientists and statisticians to work out ever more meaningful designs for science investigations. Most modern scientists are thinking about the statistics they will use to analyze their findings from the beginning or planning stages of their investigations. They are saying something like this: "I don't want my experiment to come out as some queer, quirky thing that proves nothing. How must I plan now so that in the end my results will be statistically meaningful?" Scientists know, however, that there can be no perfect answer to their questions. They can always, just by chance, get results that show unexpected quirks.
Nevertheless, as a scientist does her investigation, she is trying to uncover some meaningful results. This means more than just saying, "Yes" or "No" to the hypothesis. It means going beyond the small number of subjects she may be dealing with in her experiment or survey. It means having confidence that her findings may be stretched, or generalized, to any larger group of similar subjects. Did ingredient Q seem to prevent sunburn in the experimental group of people who used it? If so, and if that experimental group fairly represents the larger population, we may then reasonably expect that ingredient Q will prevent sunburn in most of the larger population.
The use of random choices in the first stages of an investigation means more than just helping to keep the scientist's prejudices from affecting the results. It helps to assure that the sample of people, or other subjects, used in the investigation will allow us to generalize to the larger group that the sample is intended to represent.
Can You Prove It?
Let's say that Alice is doing an experiment as her science project in which she has planted popcorn seeds in two planters to test the value of a fertilizer. She uses the controlled experiment design.
To the experimental group she adds a chemical fertilizer, urea, a nitrogen compound that may be put into the soil or dissolved in the water given the plants. Her independent variable is the addition of the urea to the experimental group. Her dependent variable, if she observes one, is the difference in growth rate (height or weight) of the plants in her two planters.
At a proper time in her experiment, she measures the heights of the plants with the following results:
We see that there is a difference between the means (commonly called average) of the two groups. The difference is 1.9 cm in favor of the experimental group; the average height of the plants in that group is 1.9 cm taller than the height of the plants in the control group. This looks good.
"See!" Alice says. "Adding urea to the experimental planting has made the corn grow faster." Can she be sure of this? No, she cannot. Maybe it was a chance happening that she got five taller growing plants in the experimental group and five shorter growing plants in the control. She should not make any decision just yet. She should get someone to make a good statistical treatment (unless she can do it herself) that would go beyond comparing the mean heights of the two groups.
A statistical analysis would show how much the heights vary among themselves. Then it would show how the means compare with a larger "population" of plants like Ken's 100 plants. Where would this larger population be found? It would be imagined, inferred, or hypothetical: it would be created out of the variability, the range, the scatter of her sample and the size of the sample. It would be created by the use of equations in statistics books.
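To make that concrete, here is a sketch (with hypothetical heights, since Alice's table is not reproduced here) of the usual next step: compare the difference between the means with the variability inside each group, summarized as a two-sample t statistic.

from statistics import mean, stdev

control      = [55.1, 56.3, 57.0, 57.8, 58.3]   # heights in cm (hypothetical)
experimental = [57.2, 58.0, 58.9, 59.6, 60.3]

difference = mean(experimental) - mean(control)                      # 1.9 cm
spread = (stdev(control) ** 2 / len(control)
          + stdev(experimental) ** 2 / len(experimental)) ** 0.5     # combined variability
t = difference / spread
print("difference = %.1f cm, t = %.2f" % (difference, t))   # the larger t is, the less likely the difference is mere chance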
Furthermore, a judgment would be made about the chance, or the probability, that the difference Alice found was or was not simply a chance difference. This, too, would be done by reference to appropriate tables in statistics books. Actually, the number of plants in Alice's experiment is too small (only five) to make it worth all of that analysis, yet her results are supported by agricultural research by professional scientists and by the experiences of the thousands of farmers who have found it useful to apply urea and other nitrogen compounds to their corn plantings.
With all of that support, why wouldn't scientists declare that they have proven the value of this treatment of corn? The problem lies partly in this question: How can you know when you have proven a thing to be true? And it lies partly in the way the words "prove" and "true" are used in mathematics and logic as compared to the way they are used in ordinary speech.
First, the mathematics and logic. You and I can agree that this is a true statement in arithmetic: 148 + 293 + 167 = 608. That is, we follow certain rules of mathematics to prove whether the statement is an equality. Mathematicians would not agree, however, that we had proven it by following the rules of addition. They are more concerned about the sources of those rules. In the end, they would show that the statement was proven by agreeing on certain things about arithmetic and its rules.
In logic of the formal sort, proof would be much the same, as in this example:
- If all wangtups have gitly speekrongs,
- And if Q is a wangtup,
- Then Q has gitly speekrongs.
Even though the statements do not mean anything in real life, if we accept the first and second statements as true, then the conclusion, the third statement, is also true. The "proof" is all right there in the statement. It has nothing to do with real people or things and their mixed-up ways.
Still, these simple examples do not do justice to mathematics and logic. Both are fascinating and powerful tools of thought or reasoning that humankind has created. The proof or truth of these examples, however, is so very much different from the kinds of proof that scientists are seeking that it becomes awkward to try to use the same language to describe them all. Even though mathematicians and logicians got there first with the terms "prove" and "true," scientists in recent times have pulled away from using these terms.
In ordinary experience as well there is a problem with these key words. Most people would say, "See, Alice proved it! It is true that urea makes corn grow faster." Or they might say, "That proves it! Hocus is better for a headache than Pocus," even though they may have used the medication only one time and their test has serious weaknesses. Or, again: "That proves it! Dreams do foretell the future. I knew that you were coming because I dreamed about it!"
These difficulties with the language, however, do not provide the main objection to the use of "prove" in scientific work. When we talk about "proving" something in science we are, in effect, predicting the future as well as examining the present. How much can we depend on something happening in the future just because today's scientific findings show it to be probable now?
In Alice's experiment, for example, she used only five plants in each planter. Such a small sample cannot tell us much about the larger population of future corn plantings, no matter how much statistical analysis we apply to it. However, let's do some more analysis of Alice's results to see how this helps us to learn about the predictive value of her findings. Let's rearrange the measurements of the corn plants according to height (see table 13.2).
Does this tell us more than a simple comparison of the means? Suppose her results in the experimental group had been as in table 13.3 (also ranked by height).
Here we see that the difference between the means of the two groups is the same as in table 13.2. But notice the range of heights in table 13.3. The experimental plants are not as uniformly taller than the control plants as they were in table 13.2. There is more variability. These results would provide a less reliable basis for predicting about future plantings.
I hope that you begin to agree, if you had not already known, that statistical treatment of data can reveal useful information. Finding the means and their difference is statistical analysis. Ranking the heights and comparing the pairs of plants is statistical analysis. These two ways of analyzing data are very elementary (even antiquated) when compared with the methods used by people with more mathematical and statistical knowledge.
| http://www.education.com/reference/article/evaluating-results-statistics-probability-proof/?page=2 | 13
68 | The try, except, finally and raise statements
A well-written program should produce valuable results even when exceptional conditions occur. A program depends on numerous resources: memory, files, other packages, input-output devices, to name a few. Sometimes it is best to treat a problem with any of these resources as an exception, which interrupts the normal sequential flow of the program.
In Exception Semantics we introduce the semantics of exceptions. We’ll show the basic exception-handling features of Python in Basic Exception Handling and the way exceptions are raised by a program in Raising Exceptions.
We’ll look at a detailed example in An Exceptional Example. In Complete Exception Handling and The finally Clause, we cover some additional syntax that’s sometimes necessary. In Exception Functions, we’ll look at a few standard library functions that apply to exceptions.
We describe most of the built-in exceptions in Built-in Exceptions. In addition to exercises in Exception Exercises, we also include style notes in Style Notes and a digression on problems that can be caused by poor use of exceptions in A Digression.
An exception is an event that interrupts the ordinary sequential processing of a program. When an exception is raised, Python will handle it immediately. Python does this by examining except clauses associated with try statements to locate a suite of statements that can process the exception. If there is no except clause to handle the exception, the program stops running, and a message is displayed on the standard error file.
An exception has two sides: the dynamic change to the sequence of execution and an object that contains information about the exceptional situation. The dynamic change is initiated by the raise statement, and can finish with the handlers that process the raised exception. If no handler matches the exception, the program’s execution effectively stops at the point of the raise.
In addition to the dynamic side of an exception, an object is created by the raise statement; this is used to carry any information associated with the exception.
Consequences. The use of exceptions has two important consequences.
First, we need to clarify where exceptions can be raised. Since various places in a program will raise exceptions, and these can be hidden deep within a function or class, their presence should be announced by specifying the possible exceptions in the docstring.
Second, multiple parts of a program will have handlers to cope with various exceptions. These handlers should handle just the meaningful exceptions. Some exceptions (like RuntimeError or MemoryError) generally can’t be handled within a program; when these exceptions are raised, the program is so badly broken that there is no real recovery.
Exceptions are a powerful tool for dealing with rare, atypical conditions. Generally, exceptions should be considered as different from the expected or ordinary conditions that a program handles. For example, if a program accepts input from a person, exception processing is not appropriate for validating their inputs. There’s nothing rare or uncommon about a person making mistakes while attempting to enter numbers or dates. On the other hand, an unexpected disconnection from a network service is a good candidate for an exception; this is a rare and atypical situation. Examples of good exceptions are those which are raised in response to problems with physical resources like files and networks.
Python has a large number of built-in exceptions, and a programmer can create new exceptions. Generally, it is better to create new exceptions rather than attempt to stretch or bend the meaning of existing exceptions.
Exception handling is done with the try statement. The try statement encapsulates several pieces of information. Primarily, it contains a suite of statements and a group of exception-handling clauses. Each exception-handling clause names a class of exceptions and provides a suite of statements to execute in response to that exception.
The basic form of a try statement looks like this:
try:
    suite
except exception 〈 , target 〉 :
    suite
except:
    suite
Each suite is an indented block of statements. Any statement is allowed in the suite. While this means that you can have nested try statements, that is rarely necessary, since you can have an unlimited number of except clauses on a single try statement.
If any of the statements in the try suite raise an exception, each of the except clauses are examined to locate a clause that matches the exception raised. If no statement in the try suite raises an exception, the except clauses are silently ignored.
The first form of the except clause provides a specific exception class which is used for matching any exception which might be raised. If a target variable name is provided, this variable will have the exception object assigned to it.
The second form of the except clause is the “catch-all” version. This will match all exceptions. If used, this must be provided last, since it will always match the raised exception.
We’ll look at the additional finally clause in a later section.
The except statement can’t easily handle a list of exception classes. The Python 2 syntax for this is confusing because it requires some additional () around the list of exceptions.
except ( exception, ... ) 〈 , target 〉 :
The Python 3 syntax will be slightly simpler. The keyword as replaces the comma, which removes the ambiguity between the target variable and a second exception class; the () around the list of exception classes are still required.
except ( exception, ... ) as target :
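A small illustration of the Python 2 spelling (not an example from the original text; the data values here are invented):

total = "12"        # deliberately a string, so the division raises TypeError
count = 4
try:
    average = total / count
except ( TypeError, ZeroDivisionError ), ex:
    print "Problem with the data:", ex
# In Python 3 the except line would be spelled:
#     except ( TypeError, ZeroDivisionError ) as ex: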
Overall Processing. The structure of the complete try statement summarizes the philosophy of exceptions. First, try the suite of statements, expecting them to work. In the unlikely event that an exception is raised, find an exception clause and execute that exception clause suite to recover from or work around the exceptional situation.
Except clauses include some combination of error reporting, recovery or work-around. For example, a recovery-oriented except clause could delete useless files. A work-around exception clause could return a complex result for the square root of a negative number.
def avg( someList ):
    """Raises TypeError or ZeroDivisionError exceptions."""
    sum= 0
    for v in someList:
        sum = sum + v
    return float(sum)/len(someList)

def avgReport( someList ):
    try:
        m= avg(someList)
        print "Average+15%=", m*1.15
    except TypeError, ex:
        print "TypeError:", ex
    except ZeroDivisionError, ex:
        print "ZeroDivisionError:", ex
This example shows the avgReport() function; it contains a try clause that evaluates the avg() function. We expect that there will be a ZeroDivisionError exception if an empty list is provided to avg(). Also, a TypeError exception will be raised if the list has any non-numeric value. Otherwise, it prints the average of the values in the list.
In the try suite, we print the average. For certain kinds of inappropriate input, we will print the exceptions which were raised.
This design is generally how exception processing is handled. We have a relatively simple, clear function which attempts to do the job in a simple and clear way. We have an application-specific process which handles exceptions in a way that's appropriate to the overall application.
Nested try Statements. In more complex programs, you may have many function definitions. If more than one function has a try statement, the nested function evaluations will effectively nest the try statements inside each other.
This example shows a function solve(), which calls another function, quad(). Both of these functions have a try statement. An exception raised by quad() could wind up in an exception handler in solve().
def sum( someList ):
    """Raises TypeError"""
    sum= 0
    for v in someList:
        sum = sum + v
    return sum

def avg( someList ):
    """Raises TypeError or ZeroDivisionError exceptions."""
    try:
        s= sum(someList)
        return float(s)/len(someList)
    except TypeError, ex:
        return "Non-Numeric Data"

def avgReport( someList ):
    try:
        m= avg(someList)
        print "Average+15%=", m*1.15
    except TypeError, ex:
        print "TypeError: ", ex
    except ZeroDivisionError, ex:
        print "ZeroDivisionError: ", ex
In this example, we have the same avgReport() function, which uses avg() to compute an average of a list. We’ve rewritten the avg() function to depend on a sum() function. Both avgReport() and avg() contain try statements. This creates a nested context for evaluation of exceptions.
Specifically, when the function sum is being evaluated, an exception will be examined by avg() first, then examined by avgReport(). For example, if sum() raises a TypeError exception, it will be handled by avg(); the avgReport() function will not see the TypeError exception.
Function Design. Note that this example has a subtle bug that illustrates an important point regarding function design. We introduced the bug when we defined avg() to return either an answer or an error status code in the form of a string. Generally, things are more complex when we try to mix return of valid results and return of error codes.
Status codes are the only way to report errors in languages that lack exceptions. C, for example, makes heavy use of status codes. The POSIX standard API definitions for operating system services are oriented toward C. A program making OS requests must examine the results to see if they are proper values or an indication that an error occurred. Python, however, doesn't have this limitation. Consequently many of the OS functions available in Python modules will raise exceptions rather than mix proper return values with status code values.
In our case, our design for avg() attempts to return either a valid numeric result or a string result. To be correct we would have to do two kinds of error checking in avgReport(). We would have to handle any exceptions and we would also have to examine the results of avg() to see if they are an error value or a proper answer.
Rather than return status codes, a better design is to simply use exceptions for all kinds of errors. Status codes have no real purpose in well-designed programs. In the next section, we'll look at how to define and raise our own exceptions.
The raise statement does two things: it creates an exception object, and immediately leaves the expected program execution sequence to search the enclosing try statements for a matching except clause. The effect of a raise statement is either to divert execution to a matching except suite, or to stop the program because no matching except suite was found to handle the exception.
The Exception object created by raise can contain a message string that provides a meaningful error message. In addition to the string, it is relatively simple to attach additional attributes to the exception.
Here are the two forms for the raise statement.
raise exceptionClass 〈 , value 〉
raise exceptionObject
The first form of the raise statement uses an exception class name. The optional parameter is the additional value that will be contained in the exception. Generally, this is a string with a message, however any object can be provided.
Here’s an example of the raise statement.
raise ValueError, "oh dear me"
This statement raises the built-in exception ValueError with an amplifying string of "oh dear me". The amplifying string in this example, one might argue, is of no use to anybody. This is an important consideration in exception design. When using a built-in exception, be sure that the arguments provided pinpoint the error condition.
The second form of the raise statement uses an object constructor to create the Exception object.
raise ValueError( "oh dear me" )
Here’s a variation on the second form in which additional attributes are provided for the exception.
ex= MyNewError( "oh dear me" )
ex.myCode= 42
ex.myType= "O+"
raise ex
In this case a handler can make use of the message, as well as the two additional attributes, myCode and myType.
Defining Your Own Exception. You will rarely have a need to raise a built-in exception. Most often, you will need to define an exception which is unique to your application.
We’ll cover this in more detail as part of the object oriented programming features of Python, in Classes . Here’s the short version of how to create your own unique exception class.
class MyError( Exception ): pass
This single statement defines a subclass of Exception named MyError. You can then raise MyError in a raise statement and check for MyError in except clauses.
Here’s an example of defining a unique exception and raising this exception with an amplifying string.
import math

class QuadError( Exception ): pass

def quad(a,b,c):
    if a == 0:
        ex= QuadError( "Not Quadratic" )
        ex.coef= ( a, b, c )
        raise ex
    if b*b-4*a*c < 0:
        ex= QuadError( "No Real Roots" )
        ex.coef= ( a, b, c )
        raise ex
    x1= (-b+math.sqrt(b*b-4*a*c))/(2*a)
    x2= (-b-math.sqrt(b*b-4*a*c))/(2*a)
    return (x1,x2)
Additional raise Statements. Exceptions can be raised anywhere, including in an except clause of a try statement. We’ll look at two examples of re-raising an exception.
We can use the simple raise statement in an except clause. This re-raises the original exception. We can use this to do standardized error handling. For example, we might write an error message to a log file, or we might have a standardized exception clean-up process.
try:
    attempt something risky
except Exception, ex:
    log_the_error( ex )
    raise
This shows how we might write the exception to a standard log in the function log_the_error() and then re-raise the original exception again. This allows the overall application to choose whether to stop running gracefully or handle the exception.
The other common technique is to transform Python errors into our application’s unique errors. Here’s an example that logs an error and transforms the built-in FloatingPointError into our application-specific error, MyError.
class MyError( Exception ): pass

try:
    attempt something risky
except FloatingPointError, e:
    do something locally, perhaps to clean up
    raise MyError("something risky failed: %s" % ( e, ) )
This allows us to have more consistent error messages, or to hide implementation details.
The following example uses a uniquely named exception to indicate that the user wishes to quit rather than supply input. We’ll define our own exception, and define function which rewrites a built-in exception to be our own exception.
We’ll define a function, ckyorn(), which does a “Check for Y or N”. This function has two parameters, prompt and help, that are used to prompt the user and print help if the user requests it. In this case, the return value is always a “Y” or “N”. A request for help (“?”) is handled automatically. A request to quit is treated as an exception, and leaves the normal execution flow. This function will accept “Q” or end-of-file (usually ctrl-D, but also ctrl-Z on Windows) as the quit signal.
class UserQuit( Exception ): pass

def ckyorn( prompt, help="" ):
    ok= 0
    while not ok:
        try:
            a=raw_input( prompt + " [y,n,q,?]: " )
        except EOFError:
            raise UserQuit
        if a.upper() in [ 'Y', 'N', 'YES', 'NO' ]: ok= 1
        if a.upper() in [ 'Q', 'QUIT' ]: raise UserQuit
        if a.upper() in [ '?' ]: print help
    return a.upper()
We can use this function as shown in the following example.
import interaction

answer= interaction.ckyorn(
    help= "Enter Y if finished entering data",
    prompt= "All done?")
This function transforms an EOFError into a UserQuit exception, and also transforms a user entry of “Q” or “q” into this same exception. In a longer program, this exception permits a short-circuit of all further processing, omitting some potentially complex if statements.
Details of the ckyorn() Function. Our function uses a loop that will terminate when we have successfully interpreted an answer from the user. We may get a request for help or perhaps some uninterpretable input from the user. We will continue our loop until we get something meaningful. The post condition will be that the variable ok is set to 1 and the answer, a, is some accepted variation of yes or no (for example "Y", "y", "N" or "n").
Within the loop, we surround our raw_input() function with a try suite. This allows us to process any kind of input, including user inputs that raise exceptions. The most common example is the user entering the end-of-file character on their keyboard.
We handle the built-in EOFError by raising our UserQuit exception. When we get end-of-file from the user, we need to tidy up and exit the program promptly.
If no exception was raised, we examine the input character to see if we can interpret it. Note that if the user enters 'Q' or 'QUIT', we treat this exactly like an end-of-file; we raise the UserQuit exception so that the program can tidy up and exit quickly.
We return a single-character result only for ordinary, valid user inputs. A user request to quit is considered extraordinary, and we raise an exception for that.
A common use case is to have some final processing that must occur irrespective of any exceptions that may arise. The situation usually arises when an external resource has been acquired and must be released. For example, a file must be closed, irrespective of any errors that occur while attempting to read it.
With some care, we can be sure that all exception clauses do the correct final processing. However, this may lead to some redundant programming. The finally clause saves us the effort of trying to carefully repeat the same statement(s) in a number of except clauses. This final step will be performed before the try block is finished, either normally or by any exception.
The complete form of a try statement looks like this:
try:
    suite
except exception 〈 , target 〉 :
    suite
except:
    suite
finally:
    suite
Each suite is an indented block of statements. Any statement is allowed in the suite. While this means that you can have nested try statements, that is rarely necessary, since you can have an unlimited number of except clauses.
The finally clause is always executed. This includes all three possible cases: if the try block finishes with no exceptions; if an exception is raised and handled; and if an exception is raised but not handled. This last case means that every nested try statement with a finally clause will have that finally clause executed.
Use a finally clause to close files, release locks, close database connections, write final log messages, and other kinds of final operations. In the following example, we use the finally clause to write a final log message.
def avgReport( someList ):
    try:
        print "Start avgReport"
        m= avg(someList)
        print "Average+15%=", m*1.15
    except TypeError, ex:
        print "TypeError: ", ex
    except ZeroDivisionError, ex:
        print "ZeroDivisionError: ", ex
    finally:
        print "Finish avgReport"
The sys module provides one function that provides the details of the exception that was raised. Programs with exception handling will occasionally use this function.
The sys.exc_info() function returns a 3-tuple with the exception, the exception’s parameter, and a traceback object that pinpoints the line of Python that raised the exception. This can be used something like the following not-very-good example.
import sys
import math

a= 2
b= 2
c= 1
try:
    x1= (-b+math.sqrt(b*b-4*a*c))/(2*a)
    x2= (-b-math.sqrt(b*b-4*a*c))/(2*a)
    print x1, x2
except:
    e,p,t= sys.exc_info()
    print e,p
This uses multiple assignment to capture the three elements of the sys.exc_info() tuple: the exception itself in e, the parameter in p and a Python traceback object in t.
This “catch-all” exception handler in this example is a bad policy. It may catch exceptions which are better left uncaught. We’ll look at these kinds of exceptions in Built-in Exceptions. For example, a RuntimeError is something you should not bother catching.
Exceptions have one interesting attribute. In the following example, we’ll assume we have an exception object named e. This would happen inside an except clause that looked like except SomeException, e:.
Traditionally, exceptions had a message attribute as well as an args attribute. These were used inconsistently.
When you create a new Exception instance, the argument values provided are loaded into the args attribute. If you provide a single value, this will also be available as message; this is a property name that references args.
Here’s an example where we provided multiple values as part of our Exception.
>>> a=Exception(1,2,3)
>>> a.args
(1, 2, 3)
>>> a.message
__main__:1: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
''
Here’s an example where we provided a single value as part of our Exception; in this case, the message attribute is made available.
>>> b=Exception("Oh dear")
>>> b.message
'Oh dear'
>>> b.args
('Oh dear',)
The following exceptions are part of the Python environment. There are three broad categories of exceptions.
Here are the non-error exceptions. Generally, you will never have a handler for these, nor will you ever raise them with a raise statement.
Here are the errors which can be meaningfully handled when a program runs.
The following errors indicate serious problems with the Python interpreter. Generally, you can't do anything if these errors should be raised.
The following exceptions are more typically returned at compile time, or indicate an extremely serious error in the basic construction of the program. While these exceptional conditions are a necessary part of the Python implementation, there’s little reason for a program to handle these errors.
The following exceptions are part of the implementation of exception objects. Normally, these never occur directly. These are generic categories of exceptions. When you use one of these names in a catch clause, a number of more specialized exceptions will match these.
Input Helpers. There are a number of common character-mode input operations that can benefit from using exceptions to simplify error handling. All of these input operations are based around a loop that examines the results of raw_input and converts this to expected Python data.
All of these functions should accept a prompt, a default value and a help text. Some of these have additional parameters to qualify the list of valid responses.
All of these functions construct a prompt of the form:
your prompt [ valid input hints ,?,q]:
If the user enters a ?, the help text is displayed. If the user enters a q, an exception is raised that indicates that the user quit. Similarly, if the KeyboardInterrupt or any end-of-file exception is received, a user quit exception is raised from the exception handler.
Most of these functions have a similar algorithm; a sketch of the pattern, applied to one of them, follows the table of functions below.
User Input Function
Construct Prompt. Construct the prompt with the hints for valid values, plus ? and q.
While Not Valid Input. Loop until the user enters valid input.
Try the following suite of operations.
Prompt and Read. Use raw_input() to prompt for and read a reply from the user.
Help?. If the user entered “?”, provide the help message.
Quit?. If the user entered “q” or “Q”, raise a UserQuit exception.
Other. Try the following suite of operations
Convert. Attempt any conversion. Some inputs will involve numeric, or date-time conversions.
Validate. If necessary, do any validation checks. For some prompts, there will be a fixed list of valid answers. There may be a numeric range or a date range. For other prompts, there is no checking required.
If the input passes the validation, break out of the loop. This is our hoped-for answer.
In the event of an exception, the user input was invalid.
Nothing?. If the user entered nothing, and there is a default value, return the default value.
In the event of any other exceptions, this function should generally raise a UserQuit exception.
Result. Return the validated user input.
Functions to implement
|ckdate:||Prompts for and validates a date. The basic version would require dates have a specific format, for example mm/dd/yy. A more advanced version would accept a string to specify the format for the input. Much of this date validation is available in the time module, which will be covered in Dates and Times: the time and datetime Modules. This function should not return bad dates or other invalid input.|
|ckint:||Display a prompt; verify and return an integer value|
|ckitem:||Build a menu; prompt for and return a menu item. A menu is a numbered list of alternative values, the user selects a value by entering the number. The function should accept a sequence of valid values, generate the numbers and return the actual menu item string. An additional help prompt of "??" should be accepted, this writes the help message and redisplays the menu.|
|ckkeywd:||Prompts for and validates a keyword from a list of keywords. This is similar to the menu, but the prompt is simply the list of keywords without numbers being added.|
|ckpath:||Display a prompt; verify and return a pathname. This can use the os.path module for information on construction of valid paths. This should use fstat to check the user input to confirm that it actually exists.|
|ckrange:||Prompts for and validates an integer in a given range. The range is given as separate values for the lowest allowed and highest allowed value. If either is not given, then that limit doesn’t apply. For instance, if only a lowest value is given, the valid input is greater than or equal to the lowest value. If only a highest value is given, the input must be less than or equal to the highest value.|
|ckstr:||Display a prompt; verify and return a string answer. This is similar to the basic raw_input(), except that it provides a simple help feature and raises exceptions when the user wants to quit.|
|cktime:||Display a prompt; verify and return a time of day. This is similar to ckdate; a more advanced version would use the time module to validate inputs. The basic version can simply accept a hh:mm:ss time string and validate it as a legal time.|
|ckyorn:||Prompts for and validates yes/no. This is similar to ckkeywd, except that it tolerates a number of variations on yes (YES, y, Y) and a number of variations on no (NO, n, N). It returns the canonical forms: Y or N irrespective of the input actually given.|
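Here is one possible sketch of the pattern, applied to ckint(); it reuses the UserQuit exception defined earlier in this chapter, and it is only an illustration, not the official solution.

def ckint( prompt, default=None, help="Enter a whole number" ):
    full_prompt = prompt + " [int,?,q]: "
    while True:
        try:
            reply = raw_input( full_prompt )
        except (EOFError, KeyboardInterrupt):
            raise UserQuit
        if reply == "?":
            print help
            continue
        if reply.upper() in ( 'Q', 'QUIT' ):
            raise UserQuit
        if reply == "" and default is not None:
            return default
        try:
            return int( reply )
        except ValueError:
            print "Please enter a whole number."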
Built-in exceptions are all named with a leading upper-case letter. This makes them consistent with class names, which also begin with a leading upper-case letter.
Most modules or classes will have a single built-in exception, often called Error. This exception will be imported from a module, and can then be qualified by the module name. Modules and module qualification is covered in Components, Modules and Packages. It is not typical to have a complex hierarchy of exceptional conditions defined by a module.
Readers with experience in other programming languages may equate an exception with a kind of goto statement. It changes the normal course of execution to a (possibly hard to find) exception-handling suite. This is a correct description of the construct, which leads to some difficult decision-making.
Some exception-causing conditions are actually predictable states of the program. The notable exclusions are I/O Error, Memory Error and OS Error. These three depend on resources outside the direct control of the running program and Python interpreter. Exceptions like Zero Division Error or Value Error can be checked with simple, clear if statements. Exceptions like Attribute Error or Not Implemented Error should never occur in a program that is reasonably well written and tested.
Relying on exceptions for garden-variety errors – those that are easily spotted with careful design or testing – is often a sign of shoddy programming. The usual story is that the programmer received the exception during testing and simply added the exception processing try statement to work around the problem; the programmer made no effort to determine the actual cause or remediation for the exception.
In their defense, exceptions can simplify complex nested if statements. They can provide a clear “escape” from complex logic when an exceptional condition makes all of the complexity moot. Exceptions should be used sparingly, and only when they clarify or simplify exposition of the algorithm. A programmer should not expect the reader to search all over the program source for the relevant exception-handling clause.
Future examples, which use I/O and OS calls, will benefit from simple exception handling. However, exception laden programs are a problem to interpret. Exception clauses are relatively expensive, measured by the time spent to understand their intent. | http://www.itmaybeahack.com/homepage/books/python/html/p02/p02c07_exceptions.html | 13 |
69 | © 2004 – 2009 WELLOG
WAVES IN ELASTIC MEDIA:
Elasticity is that property of a substance that causes the substance to resist deformation and to recover its original shape when the deforming forces are removed. A medium that recovers its shape completely after being deformed is considered to be elastic. The earth is considered almost elastic for small displacements. Most of the theory used in acoustic logging is described mathematically using the theory of elastic waveforms in elastic media.
A force applied to a material is called stress (s). Stress is measured in terms of force per unit area. Stress can be compressional, expansional, or shear.
s = F/A
Where: F = force
A = Area
Stress in terms of Young’s modulus is expressed as:
s = E * e
Where: E = Young’s modulus
e = strain
Compressional stress occurs when a force is applied over one side of a body while it is supported by an equal force on the opposite side.
Tensile stress (pulling or stretching) is considered as a negative compressional stress.
An example of compression stress is the force of a weight drop used in seismic geophysics. A 100 pound weight dropped from a height of 10 feet delivers 1000 ft-pounds of energy to the ground as a compressional stress pulse.
Most of the phenomena in acoustic logging are related to strain. When an elastic material is subjected to stress, changes in physical dimension and shape called strain, occur.
Strain (e) = Elongation = change in dimension/original dimension
Hooke’s Law states that in an elastic medium with small strains, the strain is directly proportional to the stress that caused it. Elastic materials are referred to as exhibiting “Hookean” behavior.
Volume stress is defined as:
Volume stress = DF/A
Volume strain is defined as:
Volume strain = DV/V
Where: DV = change in volume
V = original volume
A = Area
Bulk Modulus (B) is defined as:
B = - volume stress/volume strain
Note: Bulk modulus can be obtained from acoustic velocity and bulk density logs.
B = r * Vp^2 – (4 * m / 3)
Note: A negative sign is used to indicate a decrease in volume due to compression.
Water is compressible. When subjected to 500 atm, water is compressed 2 to 3 percent.
The inverse of bulk modulus is compressibility.
Compressibility = 1/B
The velocity of acoustic waves in a medium is approximately related to the square root of its elastic properties and inversely related to its inertial properties.
In gases or liquids:
V (approx.) = (elastic property/inertial property)^1/2
V = (B/r)^1/2
Where: B = Bulk Modulus (an elastic property)
r = Density (an inertial property)
Velocity of compressional (P) waves in Rock materials:
Vp = ((B + 4/3 x S)/r)^1/2
Where: S = shear modulus (defined below)
B = Bulk modulus (defined above)
r = Density
Velocity of shear (s) waves:
Vs = (S/r)^1/2
The velocity of shear waves is about .7 that of compressional waves.
Velocities in various rock types will be discussed later.
BULK MODULUS VALUES x 10^10 dynes per square centimeter
(from Guyod, Geophysical Well Logging, 1967)
Non-porous solids: Bulk Modulus:
Water saturated 5-20% porous rocks in situ:
Elastic materials are materials in which stress and strain are proportional to each other. If the stress is doubled, the strain is doubled. The ratio of the stress and strain in an object is referred to as elastic modulus or Young’s modulus.
Young’s modulus = E = tensile or compressive stress/tensile or compressive strain
Young’s modulus can be obtained from acoustic velocity and bulk density logs.
E = 2 * r * Vs^2 * (1 + a)
Vs = velocity of the shear wave
r = bulk density
a = Poisson’s ratio
Young’s Modulus chart.
Shear stress = shear force/A
Shear strain = Ds/L
Shear Modulus ( m ) is defined as:
m = shear stress/ shear strain
Shear modulus can be obtained from acoustic velocity and bulk density.
m = r * Vs^2
Shear Modulus chart.
Poisson’s ratio may be considered as a measurement of the geometric change in shape due to extensional stress.
Poisson’s ratio ( s ) is defined as the ratio of relative increase or decrease in diameter to relative compression or elongation.
s = (Dd/d) / (Dl/l)
Where: d = diameter
l = length
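As a worked illustration (a sketch, not part of the original WELLOG text), the relations above can be evaluated directly from log readings. SI units are assumed here — density in kg/m3 and velocities in m/s — and the Poisson's ratio expression in terms of Vp and Vs is the standard velocity form, not a formula quoted from this page.

def elastic_moduli(rho, vp, vs):
    mu = rho * vs ** 2                          # shear modulus, m = r * Vs^2
    bulk = rho * vp ** 2 - 4.0 * mu / 3.0       # bulk modulus, B = r * Vp^2 - 4m/3
    poisson = (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))
    young = 2.0 * rho * vs ** 2 * (1.0 + poisson)
    return bulk, mu, young, poisson

# The shale sample from the table below: 2.67 gm/cc = 2670 kg/m3, Vp = 2124 m/s, Vs = 1470 m/s
print(elastic_moduli(2670.0, 2124.0, 1470.0))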
Rock Types:      Density (Gm/cc):   Young's Modulus:   Poisson's Ratio:   Vp (m/sec):   Vs (m/sec):   Vp/Vs:   Vs as %Vp:
Shale (AZ) 2.67 0.120 0.040 2124 1470 1.44 69.22%
Siltstone (CO) 2.50 0.130 0.120 2319 1524 1.52 65.71%
Limestone(AZ) 2.44 0.170 0.180 2750 1718 1.60 62.47%
Schist (MA) 2.70 0.544 0.181 4680 2921 1.60 62.41%
Note: Velocities are calculated from Density, Young’s modulus, and Poisson’s ratio.
Compressional (P) wave velocities: (m/sec)
Unconsolidated: Velocity: Consolidated: Velocity:
Weathered layer: 300 - 900 Granite: 5000 - 6000
Soil: 250 - 600 Basalt: 5400 - 6400
Alluvium: 500 – 000 Metamorphic Rocks: 3500 - 7000
Unsaturated Sand: 200 – 1000 Sandstone & Shale: 2000 - 4500
Saturated Sand: 800 – 2200 Limestone: 2000 - 6000
Sand & Water: 1400 - 1600
Unsaturated Gravel: 400 – 500 Air: 331.5
Saturated Gravel: 500 - 1500
Note: This is only a partial listing.
Table 1 and Table 2 from: Press, Frank (1966), Seismic velocities, in Clark, S. P., Jr., ed., Handbook of Physical Constants (revised edition), Geological Society of America Memoir 97.
Using the prior tables, it is possible to distinguish velocities of dry sediments from saturated sediments. It is further possible to distinguish sediments from rocks and igneous rocks from metamorphic rocks. | http://www.wellog.com/webinar/interp_p3_p2.htm | 13 |
73 | In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers or the computable reals.
Equivalent definitions can be given using μ-recursive functions, Turing machines or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
Informal definition using a Turing machine as example
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936, i.e. as "sequences of digits interpreted as decimal fractions" between 0 and 1:
- "A computable number [is] one for which there is a Turing machine which, given n on its initial tape, terminates with the nth digit of that number [encoded on its tape]." (Minsky 1967:159)
The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's table – is being used to define what is a potentially-infinite string of decimal digits.
This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not.
Formal definition
A real number a is computable if it can be approximated by some computable function in the following manner: given any positive integer n, the function produces an integer k such that (k − 1)/n ≤ a ≤ (k + 1)/n.
There are two similar definitions that are equivalent:
- There exists a computable function which, given any positive rational error bound ε, produces a rational number r such that |r − a| ≤ ε.
- There is a computable sequence of rational numbers q_i converging to a such that |q_i − q_{i+1}| < 2^−i for each i.
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function D which, when provided with a rational number r as input, returns D(r) = true or D(r) = false, satisfying the following conditions: D(r) = true for some rational r; D(r) = false for some rational r; if D(r) = true and D(s) = false then r < s; and whenever D(r) = true there is a larger rational s with D(s) = true (the cut has no greatest element).
An example is given by a program D that defines the cube root of 3. Assuming this is defined by: D(r) = true when r^3 < 3, and D(r) = false otherwise.
A real number is computable if and only if there is a computable Dedekind cut D converging to it. The function D is unique for each irrational computable number (although of course two different programs may provide the same function).
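As an illustrative sketch (not part of the original article), the cut for the cube root of 3 can be written down directly, and a bisection driven by the cut produces rational approximations using exact arithmetic:

from fractions import Fraction

def D(r):
    """The cut for the cube root of 3: true exactly when r**3 < 3."""
    return r ** 3 < 3

def approximate(n):
    """Return a rational within 1/n of the cube root of 3."""
    low, high = Fraction(1), Fraction(2)     # D(low) is true, D(high) is false
    while high - low > Fraction(1, n):
        mid = (low + high) / 2
        if D(mid):
            low = mid
        else:
            high = mid
    return low

print(float(approximate(1000)))   # about 1.44225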
A complex number is called computable if its real and imaginary parts are computable.
While the set of real numbers is uncountable, the set of computable numbers is only countable and thus almost all real numbers are not computable. The computable numbers can be counted by assigning a Gödel number to each Turing machine definition. This gives a function from the naturals to the computable reals. Although the computable numbers are an ordered field, the set of Gödel numbers corresponding to computable numbers is not itself computably enumerable, because it is not possible to effectively determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Thus Cantor's diagonal argument cannot be used to produce uncountably many computable reals; at best, the reals formed from this method will be uncomputable.
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero. These operations are actually uniformly computable; for example, there is a Turing machine which on input (A, B, ε) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an ε-approximation of a+b.
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number (Bridges and Richman, 1987:58). A sequence with this property is known as a Specker sequence, as the first construction is due to E. Specker (1949). Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
The order relation on the computable numbers is not computable. There is no Turing machine which on input A (the description of a Turing machine approximating the number a) outputs "YES" if a > 0 and "NO" if a ≤ 0. The reason: suppose the machine described by A keeps outputting 0 as approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers a and b, where a≠b, and outputs whether a<b or a>b. It is sufficient to use ε-approximations where ε<|b-a|/2; so by taking increasingly small ε (with a limit to 0), one eventually can decide whether a<b or a>b.
Every computable number is definable, but not vice versa. There are many definable, noncomputable real numbers, including:
- Chaitin's constant, Ω, the halting probability of a universal Turing machine
- The real number whose binary expansion encodes, digit by digit, the solution to the halting problem for a fixed universal Turing machine
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each Universal Turing machine. A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
Every computable number is arithmetical.
The set of computable real numbers (as well as every countable, densely ordered subset of reals without ends) is order-isomorphic to the set of rational numbers.
Digit strings and the Cantor and Baire spaces
Turing's original paper defined computable numbers as follows:
- A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer as input and produces the -th digit of the real number's decimal expansion as output.
(Note that the decimal expansion of a only refers to the digits following the decimal point.)
Turing was aware that this definition is equivalent to the ε-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the ε sense: if ε = 10^−n, then the first n digits of the decimal expansion for a provide an ε-approximation of a. For the converse, we pick a computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a, but it may improperly end in an infinite sequence of 9's, in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of 2^ω (total 0,1-valued functions) instead of real numbers in [0,1]. The members of 2^ω can be identified with binary decimal expansions, but since the expansions .d1d2…dn0111… and .d1d2…dn1000… denote the same real number, the interval [0,1] can only be bijectively (and homeomorphically under the subset topology) identified with the subset of 2^ω not ending in all 1's.
Note that this property of decimal expansions means it's impossible to effectively identify computable real numbers defined in terms of a decimal expansion and those defined in the approximation sense. Hirst has shown there is no algorithm which takes as input the description of a Turing machine which produces approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition (see Hirst 2007). Similarly it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers, in order to produce one digit it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason that the contemporary definition of computable numbers uses approximations rather than decimal expansions.
However, from a computational or measure-theoretic perspective the two structures 2^ω and [0,1] are essentially identical, and computability theorists often refer to members of 2^ω as reals. While 2^ω is totally disconnected, for questions about Π01 classes or randomness it is much less messy to work in 2^ω.
Elements of Baire space ω^ω are sometimes called reals as well, and though it contains a homeomorphic image of ℝ, ω^ω isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance, the unique x ∈ 2^ω satisfying a universal formula with quantifier-free matrix must be computable, while the unique f ∈ ω^ω satisfying a universal formula can be arbitrarily high in the hyperarithmetic hierarchy.
Can computable numbers be used instead of the reals?
The computable numbers include many of the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by what Bishop and Richman call the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
There are some computer packages that work with computable real numbers, representing the real numbers as programs computing approximations. One example is the RealLib package (reallib home page).
See also
- Oliver Aberth 1968, Analysis in the Computable Number Field, Journal of the Association for Computing Machinery (JACM), vol 15, iss 2, pp 276–299. This paper describes the development of the calculus over the computable number field.
- Errett Bishop and Douglas Bridges, Constructive Analysis, Springer, 1985, ISBN 0-387-15066-8
- Douglas Bridges and Fred Richman. Varieties of Constructive Mathematics, Oxford, 1987.
- Jeffry L. Hirst, Representations of reals in reverse mathematics, Bulletin of the Polish Academy of Sciences, Mathematics, 55, (2007) 303–316.
- Marvin Minsky 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc. Englewood Cliffs, NJ. No ISBN. Library of Congress Card Catalog No. 67-12342. His chapter §9 "The Computable Real Numbers" expands on the topics of this article.
- E. Specker, "Nicht konstruktiv beweisbare Sätze der Analysis" J. Symbol. Logic, 14 (1949) pp. 145–158
- Turing, A.M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 (1937) 42 (1): 230–65, doi:10.1112/plms/s2-42.1.230 (and Turing, A.M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction", Proceedings of the London Mathematical Society, 2 (1937) 43 (6): 544–6, doi:10.1112/plms/s2-43.6.544). Computable numbers (and Turing's a-machines) were introduced in this paper; the definition of computable numbers uses infinite decimal sequences.
- Klaus Weihrauch 2000, Computable analysis, Texts in theoretical computer science, Springer, ISBN 3-540-66817-9. §1.3.2 introduces the definition by nested sequences of intervals converging to the singleton real. Other representations are discussed in §4.1.
- Klaus Weihrauch, A simple introduction to computable analysis
Computable numbers were defined independently by Turing, Post and Church. See The Undecidable, ed. Martin Davis, for further original papers. | http://www.algebra.com/algebra/homework/real-numbers/Computable_number.wikipedia | 13 |
84 | Inertia is the resistance of any physical object to a change in its state of motion. It is represented numerically by an object's mass. The principle of inertia is one of the fundamental principles of classical physics which are used to describe the motion of matter and how it is affected by applied forces. Inertia comes from the Latin word, "iners", meaning idle, or lazy. Sir Isaac Newton defined inertia in Definition 3 of his Philosophiæ Naturalis Principia Mathematica, which states:
The vis insita, or innate force of matter is a power of resisting, by which every body, as much as in it lies, endeavors to preserve in its present state, whether it be of rest, or of moving uniformly forward in a straight line.
In common usage, however, people may also use the term "inertia" to refer to an object's "amount of resistance to change in velocity" (which is quantified by its mass), or sometimes to its momentum, depending on the context (e.g. "this object has a lot of inertia"). The term "inertia" is more properly understood as shorthand for "the principle of inertia" as described by Newton in his First Law of Motion. This law, expressed simply, says that an object that is not subject to any net external force moves at a constant velocity. In even simpler terms, inertia means that an object will always continue moving at its current speed and in its current direction until some force causes its speed or direction to change. This would include an object that is not in motion (velocity = zero), which will remain at rest until some force causes it to move.
On the surface of the Earth the nature of inertia is often masked by the effects of friction, which generally tends to decrease the speed of moving objects (often even to the point of rest), and by the acceleration due to gravity. The effects of these two forces misled classical theorists such as Aristotle, who believed that objects would move only as long as force was being applied to them.
Prior to the Renaissance in the 15th century, the generally accepted theory of motion in Western philosophy was that proposed by Aristotle (around 335 BC to 322 BC), which stated that in the absence of an external motive power, all objects (on earth) would naturally come to rest in a state of no movement, and that moving objects only continue to move so long as there is a power inducing them to do so. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium which continues to move the projectile in some way. As a consequence, Aristotle concluded that such violent motion in a void was impossible for there would be nothing there to keep the body in motion against the resistance of its own gravity. Then in a statement regarded by Newton as expressing his Principia's first law of motion, Aristotle continued by asserting that a body in (non-violent) motion in a void would continue moving forever if externally unimpeded:
Despite its remarkable success and general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over the nearly 2 millennia of its reign. For example, Lucretius (following, presumably, Epicurus) clearly stated that the 'default state' of matter was motion, not stasis. In the 6th century, John Philoponus criticized Aristotle's view, noting the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of the surrounding medium but by some property implanted in the object when it was set in motion. This was not the modern concept of inertia, for there was still the need for a power to keep a body in motion. This view was strongly opposed by Averroes and many scholastic philosophers who supported Aristotle. However this view did not go unchallenged in the Islamic world, where Philoponus did have several supporters who further developed his ideas.
Mozi (Chinese: 墨子; pinyin: Mòzǐ; ca. 470 BCE–ca. 390 BCE), a philosopher who lived in China during the Hundred Schools of Thought period (early Warring States Period), composed or collected his thought in the book Mozi, which contains the following sentence: "The cessation of motion is due to the opposing force ... If there is no opposing force ... the motion will never stop." According to Joseph Needham, this a precursor to Newton's first law of motion.
Several Muslim scientists from the medieval Islamic world wrote Arabic treatises on theories of motion. In the early 11th century, the Arabic scientist Ibn al-Haytham (Arabic: ابن الهيثم) (Latinized as Alhacen) hypothesized that an object will move perpetually unless a force causes it to stop or change direction. Alhacen's model of motion thus bears resemblance to the law of inertia (now known as Newton's first law of motion) later stated by Galileo Galilei in the 16th century.
Alhacen's contemporary, the Persian scientist Ibn Sina (Latinized as Avicenna) developed an elaborate theory of motion, in which he made a distinction between the inclination and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance. Avicenna also referred to mayl to as being proportional to weight times velocity, which was similar to Newton's theory of momentum. Avicenna's concept of mayl was later used in Jean Buridan's theory of impetus.
Abū Rayhān al-Bīrūnī (973-1048) was the first physicist to realize that acceleration is connected with non-uniform motion. The first scientist to reject Aristotle's idea that a constant force produces uniform motion was the Arabic Muslim physicist and philosopher Hibat Allah Abu'l-Barakat al-Baghdaadi in the early 12th century. He was the first to argue that a force applied continuously produces acceleration, which is considered "the fundamental law of classical mechanics", and vaguely foreshadows Newton's second law of motion.
In the early 16th century, al-Birjandi, in his analysis on the Earth's rotation, developed a hypothesis similar to Galileo's notion of "circular inertia", which he described in the following observational test:
"The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane (sath) of the horizon; this is witnessed by experience (tajriba). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived (hissi) horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks."
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
"…[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path."
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
The law of inertia states that it is the tendency of an object to resist a change in motion. According to Newton's words, an object will stay at rest and or stay in motion unless acted on by a net external force, whether it results from gravity, friction, contact, or some other source. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the earth (and everything on it) was in fact never "at rest", but was actually in constant motion around the sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
It is also worth noting that Galileo later went on to conclude that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Einstein to develop the theory of Special Relativity.
Galileo's concept of inertia would later come to be refined and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work, Philosophiae Naturalis Principia Mathematica, in 1687):
Unless acted upon by a net unbalanced force, an object will maintain a constant velocity.
Note that "velocity" in this context is defined as a vector, thus Newton's "constant velocity" implies both constant speed and constant direction (and also includes the case of zero speed, or no motion). Since initial publication, Newton's Laws of Motion (and by extension this first law) have come to form the basis for the almost universally accepted branch of physics now termed classical mechanics.
The actual term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1618–1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to these concepts as it is today.
Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton actually attributed the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one which we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now basically equivalent.
Albert Einstein's theory of Special Relativity, as proposed in his 1905 paper, "On the Electrodynamics of Moving Bodies," was built on the understanding of inertia and inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning (in fact the entire theory was based on Newton's definition of inertia). However, this resulted in a limitation inherent in Special Relativity that the principle of relativity could only apply to reference frames that were inertial in nature (meaning when no acceleration was present). In an attempt to address this limitation, Einstein proceeded to develop his theory of General Relativity ("The Foundation of the General Theory of Relativity," 1916), which ultimately provided a unified theory for both inertial and noninertial (accelerated) reference frames. However, in order to accomplish this, in General Relativity Einstein found it necessary to redefine several fundamental aspects of the universe (such as gravity) in terms of a new concept of "curvature" of spacetime, instead of the more traditional system of forces understood by Newton.
As a result of this redefinition, Einstein also redefined the concept of "inertia" in terms of geodesic deviation instead, with some subtle but significant additional implications. The result of this is that according to General Relativity, when dealing with very large scales, the traditional Newtonian idea of "inertia" does not actually apply, and cannot necessarily be relied upon. Luckily, for sufficiently small regions of spacetime, the Special Theory can still be used, in which inertia still means the same (and works the same) as in the classical model.
Another profound, perhaps the most well-known, conclusion of the theory of Special Relativity was that energy and mass are not separate things, but are, in fact, interchangeable. This new relationship, however, also carried with it new implications for the concept of inertia. The logical conclusion of Special Relativity was that if mass exhibits the principle of inertia, then inertia must also apply to energy as well. This theory, and subsequent experiments confirming some of its conclusions, have also served to radically expand the definition of inertia in some contexts to apply to a much wider context including energy as well as matter.
According to Isaac Asimov in "Understanding Physics": "This tendency for motion (or for rest) to maintain itself steadily unless made to do otherwise by some interfering force can be viewed as a kind of 'laziness', a kind of unwillingness to make a change." And indeed, Newton's laws of motion, as Isaac Asimov goes on to explain, "represent assumptions and definitions and are not subject to proof. In particular, the notion of 'inertia' is as much an assumption as Aristotle's notion of 'natural place'.... To be sure, the new relativistic view of the universe advanced by Einstein makes it plain that in some respects Newton's laws of motion are only approximations.... At ordinary velocities and distance, however, the approximations are extremely good."
Physics and mathematics appear to be less inclined to use the original concept of inertia as "a tendency to maintain momentum" and instead favor the mathematically useful definition of inertia as the measure of a body's resistance to changes in momentum or simply a body's inertial mass.
This was clear in the beginning of the 20th century, when the theory of relativity was not yet created. Mass, m, denoted something like amount of substance or quantity of matter. And at the same time mass was the quantitative measure of inertia of a body.
The mass of a body determines the momentum P of the body at given velocity v; it is a proportionality factor in the formula:
The factor m is referred to as inertial mass.
But mass as related to 'inertia' of a body can be defined also by the formula:
Here, F is force, m is mass, and a is acceleration.
By this formula, the greater its mass, the less a body accelerates under given force. Masses m defined by the formula (1) and (2) are equal because the formula (2) is a consequence of the formula (1) if mass does not depend on time and speed. Thus, "mass is the quantitative or numerical measure of body’s inertia, that is of its resistance to being accelerated".
This meaning of a body's inertia therefore is altered from the original meaning as "a tendency to maintain momentum" to a description of the measure of how difficult it is to change the momentum of a body.
The only difference there appears to be between inertial mass and gravitational mass is the method used to determine them.
Gravitational mass is measured by comparing the force of gravity of an unknown mass to the force of gravity of a known mass. This is typically done with some sort of balance scale. The beauty of this method is that no matter where, or on what planet you are, the masses will always balance out because the gravitational acceleration on each object will be the same. This does break down near supermassive objects such as black holes and neutron stars due to the steep gradient of the gravitational field around such objects.
Inertial mass is found by applying a known force to an unknown mass, measuring the acceleration, and applying Newton's Second Law, m = F/a. This gives an accurate value for mass, limited only by the accuracy of the measurements. When astronauts need to be weighed in the weightlessness of free fall, they actually find their inertial mass in a special chair.
The interesting thing is that, physically, no difference has been found between gravitational and inertial mass. Many experiments have been performed to check the values and the experiments always agree to within the margin of error for the experiment. Einstein used the fact that gravitational and inertial mass were equal to begin his Theory of General Relativity in which he postulated that gravitational mass was the same as inertial mass, and that the acceleration of gravity is a result of a 'valley' or slope in the space-time continuum that masses 'fell down' much as pennies spiral around a hole in the common donation toy at a chain store. Dennis Sciama later showed that the reaction force produced by the combined gravity of all matter in the universe upon an accelerating object is mathematically equal to the object's inertia , but this would only be a workable physical explanation if the gravitational effects operated instantaneously.
In a location such as a steadily moving railway carriage, a dropped ball (as seen by an observer in the carriage) would behave as it would if it were dropped in a stationary carriage. The ball would simply descend vertically. It is possible to ignore the motion of the carriage by defining it as an inertial frame. In a moving but non-accelerating frame, the ball behaves normally because the train and its contents continue to move at a constant velocity. Before being dropped, the ball was traveling with the train at the same speed, and the ball's inertia ensured that it continued to move in the same speed and direction as the train, even while dropping. Note that, here, it is inertia which ensured that, not its mass.
In an inertial frame all the observers in uniform (non-accelerating) motion will observe the same laws of physics. However observers in another inertial frame can make a simple, and intuitively obvious, transformation (the Galilean transformation), to convert their observations. Thus, an observer from outside the moving train could deduce that the dropped ball within the carriage fell vertically downwards.
However, in frames which are experiencing acceleration (non-inertial frames), objects appear to be affected by fictitious forces. For example, if the railway carriage was accelerating, the ball would not fall vertically within the carriage but would appear to an observer to be deflected because the carriage and the ball would not be traveling at the same speed while the ball was falling. Other examples of fictitious forces occur in rotating frames such as the earth. For example, a missile at the North Pole could be aimed directly at a location and fired southwards. An observer would see it apparently deflected away from its target by a force (the Coriolis force) but in reality the southerly target has moved because earth has rotated while the missile is in flight. Because the earth is rotating, a useful inertial frame of reference is defined by the stars, which only move imperceptibly during most observations.The law of inertia is also known as Isaac Newton's first law of motion.
There is no single accepted theory that explains the source of Inertia. Various efforts by notable physicists such as Ernst Mach (see Mach's principle), Albert Einstein, D Sciama, and Bernard Haisch have all run into significant criticisms from more recent theorists. For a recent treatment of the issue see: "Relativity and the Nature of Spacetime", Chapter 9, by Vesselin Petkov, 2nd ed. 2009.
Another form of inertia is rotational inertia (→ moment of inertia), which refers to the fact that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum is unchanged, unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia often has hidden practical consequences. It also depends on the object remaining structurally intact as a rigid body.
"It was a permanent force whose effect got dissipated only as a result of external agents such as air resistance. He is apparently the first to conceive such a permanent type of impressed virtue for non-natural motion."
"Thus he considered impetus as proportional to weight times velocity. In other words, his conception of impetus comes very close to the concept of momentum of Newtonian mechanics."
Declension of inertia (type kulkija)
Inertia is the quality of an object to keep the same velocity (speed) unless it is acted upon by an outside force. Inertia is also called Sir Isaac Newton's First Law of Motion. The First Law of Motion says that:
Every body perseveres in its state of being at rest or of moving uniformly straight ahead, except insofar as it is compelled to change its state by forces impressed. [Cohen & Whitman 1999 translation]
This basically means:
Every object stays at rest or stays moving at the same speed unless something makes it change. | http://www.thefullwiki.org/Inertia | 13 |
54 | In economics, a cost curve is a graph of the costs of production as a function of total quantity produced. In a free market economy, productively efficient firms use these curves to find the optimal point of production (minimizing cost), and profit maximizing firms can use them to decide output quantities to achieve those aims. There are various types of cost curves, all related to each other, including total and average cost curves, and marginal ("for each additional unit") cost curves, which are equal to the differential of the total cost curves. Some are applicable to the short run, others to the long run.
Short-run average variable cost curve (SRAVC)
Average variable cost (which is a short-run concept) is the variable cost (typically labor cost) per unit of output: SRAVC = wL / Q where w is the wage rate, L is the quantity of labor used, and Q is the quantity of output produced. The SRAVC curve plots the short-run average variable cost against the level of output, and is typically drawn as U-shaped.
Short-run average total cost curve (SRATC or SRAC)
The average total cost curve is constructed to capture the relation between cost per unit of output and the level of output, ceteris paribus. A perfectly competitive and productively efficient firm organizes its factors of production in such a way that the average cost of production is at the lowest point. In the short run, when at least one factor of production is fixed, this occurs at the output level where it has enjoyed all possible average cost gains from increasing production. This is at the minimum point in the diagram on the right.
Short-run total cost is given by
- STC = PKK+PLL,
where PK is the unit price of using physical capital per unit time, PL is the unit price of labor per unit time (the wage rate), K is the quantity of physical capital used, and L is the quantity of labor used. From this we obtain short-run average cost, denoted either SATC or SAC, as STC / Q:
- SRATC or SRAC = PKK/Q + PLL/Q = PK / APK + PL / APL,
where APK = Q/K is the average product of capital and APL = Q/L is the average product of labor.:191
Short run average cost equals average fixed costs plus average variable costs. Average fixed cost continuously falls as production increases in the short run, because K is fixed in the short run. The shape of the average variable cost curve is directly determined by increasing and then diminishing marginal returns to the variable input (conventionally labor).:210
Long-run average cost curve (LRAC)
The long-run average cost curve depicts the cost per unit of output in the long run—that is, when all productive inputs' usage levels can be varied. All points on the line represent least-cost factor combinations; points above the line are attainable but unwise, while points below are unattainable given present factors of production. The behavioral assumption underlying the curve is that the producer will select the combination of inputs that will produce a given output at the lowest possible cost. Given that LRAC is an average quantity, one must not confuse it with the long-run marginal cost curve, which is the cost of one more unit.:232 The LRAC curve is created as an envelope of an infinite number of short-run average total cost curves, each based on a particular fixed level of capital usage.:235 The typical LRAC curve is U-shaped, reflecting increasing returns of scale where negatively-sloped, constant returns to scale where horizontal and decreasing returns (due to increases in factor prices) where positively sloped.:234 Contrary to Viner, the envelope is not created by the minimum point of each short-run average cost curve.:235 This mistake is recognized as Viner's Error.
In a long-run perfectly competitive environment, the equilibrium level of output corresponds to the minimum efficient scale, marked as Q2 in the diagram. This is due to the zero-profit requirement of a perfectly competitive equilibrium. This result, which implies production is at a level corresponding to the lowest possible average cost,:259 does not imply that production levels other than that at the minimum point are not efficient. All points along the LRAC are productively efficient, by definition, but not all are equilibrium points in a long-run perfectly competitive environment.
In some industries, the bottom of the LRAC curve is large in comparison to market size (that is to say, for all intents and purposes, it is always declining and economies of scale exist indefinitely). This means that the largest firm tends to have a cost advantage, and the industry tends naturally to become a monopoly, and hence is called a natural monopoly. Natural monopolies tend to exist in industries with high capital costs in relation to variable costs, such as water supply and electricity supply.:312
Short-run marginal cost curve (SRMC)
A short-run marginal cost curve graphically represents the relation between marginal (i.e., incremental) cost incurred by a firm in the short-run production of a good or service and the quantity of output produced. This curve is constructed to capture the relation between marginal cost and the level of output, holding other variables, like technology and resource prices, constant. The marginal cost curve is U-shaped. Marginal cost is relatively high at small quantities of output; then as production increases, marginal cost declines, reaches a minimum value, then rises. The marginal cost is shown in relation to marginal revenue (MR), the incremental amount of sales revenue that an additional unit of the product or service will bring to the firm. This shape of the marginal cost curve is directly attributable to increasing, then decreasing marginal returns (and the law of diminishing marginal returns). Marginal cost equals w/MPL.:191 For most production processes the marginal product of labor initially rises, reaches a maximum value and then continuously falls as production increases. Thus marginal cost initially falls, reaches a minimum value and then increases.:209 The marginal cost curve intersects both the average variable cost curve and (short-run) average total cost curve at their minimum points. When the marginal cost curve is above an average cost curve the average curve is rising. When the marginal costs curve is below an average curve the average curve is falling. This relation holds regardless of whether the marginal curve is rising or falling.:226
Long-run marginal cost curve (LRMC)
The long-run marginal cost curve shows for each unit of output the added total cost incurred in the long run, that is, the conceptual period when all factors of production are variable so as minimize long-run average total cost. Stated otherwise, LRMC is the minimum increase in total cost associated with an increase of one unit of output when all inputs are variable.
The long-run marginal cost curve is shaped by return to scale, a long-run concept, rather than the law of diminishing marginal returns, which is a short-run concept. The long-run marginal cost curve tends to be flatter than its short-run counterpart due to increased input flexibility as to cost minimization. The long-run marginal cost curve intersects the long-run average cost curve at the minimum point of the latter.:208 When long-run marginal costs are below long-run average costs, long-run average costs are falling (as to additional units of output).:207 When long-run marginal costs are above long run average costs, average costs are rising. Long-run marginal cost equals short run marginal-cost at the least-long-run-average-cost level of production. LRMC is the slope of the LR total-cost function.
Graphing cost curves together
Cost curves can be combined to provide information about firms. In this diagram for example, firms are assumed to be in a perfectly competitive market. In a perfectly competitive market the price that firms are faced with would be the price at which the marginal cost curve cuts the average cost curve.
Cost curves and production functions
Assuming that factor prices are constant, the production function determines all cost functions. The variable cost curve is the inverted short-run production function or total product curve and its behavior and properties are determined by the production function.:209 [nb 1] Because the production function determines the variable cost function it necessarily determines the shape and properties of marginal cost curve and the average cost curves.
If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale (i.e., is operating in a downward sloping region of the long-run average cost curve) if and only if it has increasing returns to scale. Likewise, it has diseconomies of scale (is operating in an upward sloping region of the long-run average cost curve) if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run market equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).
If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.
Relationship between different curves
- Total Cost = Fixed Costs (FC) + Variable Costs (VC)
- Marginal Cost (MC) = dC/dQ; MC equals the slope of the total cost function and of the variable cost function
- Average Total Cost (ATC) = Total Cost/Q
- Average Fixed Cost (AFC) = FC/Q
- Average Variable Cost = VC/Q.
- ATC = AFC + AVC
- The MC curve is related to the shape of the ATC and AVC curves::212
- At a level of Q at which the MC curve is above the average total cost or average variable cost curve, the latter curve is rising.:212
- If MC is below average total cost or average variable cost, then the latter curve is falling.
- If MC equals average total cost, then average total cost is at its minimum value.
- If MC equals average variable cost, then average variable cost is at its minimum value.
Relationship between short run and long run cost curves
Basic: For each quantity of output there is one cost minimizing level of capital and a unique short run average cost curve associated with producing the given quantity.
- Each STC curve can be tangent to the LRTC curve at only one point. The STC curve cannot cross (intersect) the LRTC curve.:230:228-229 The STC curve can lie wholly “above” the LRTC curve with no tangency point.:256
- One STC curve is tangent to LRTC at the long-run cost minimizing level of production. At the point of tangency LRTC = STC. At all other levels of production STC will exceed LRTC.:292-299
- Average cost functions are the total cost function divided by the level of output. Therefore the SATC curveis also tangent to the LRATC curve at the cost-minimizing level of output. At the point of tangency LRATC = SATC. At all other levels of production SATC > LRATC:292-299 To the left of the point of tangency the firm is using too much capital and fixed costs are too high. To the right of the point of tangency the firm is using too little capital and diminishing returns to labor are causing costs to increase.
- The slope of the total cost curves equals marginal cost. Therefore when STC is tangent to LTC, SMC = LRMC.
- At the long run cost minimizing level of output LRTC = STC; LRATC = SATC and LRMC = SMC, :292-299.
- The long run cost minimizing level of output may be different from minimum SATC,:229 :186.
- With fixed unit costs of inputs, if the production function has constant returns to scale, then at the minimal level of the SATC curve we have SATC = LRATC = SMC = LRMC.:292-299
- With fixed unit costs of inputs, if the production function has increasing returns to scale, the minimum of the SATC curve is to the right of the point of tangency between the LRAC and the SATC curves.:292-299 Where LRTC = STC, LRATC = SATC and LRMC = SMC.
- With fixed unit costs of inputs and decreasing returns the minimum of the SATC curve is to the left of the point of tangency between LRAC and SATC.:292-299 Where LRTC = STC, LRATC = SATC and LRMC = SMC.
- With fixed unit input costs, a firm that is experiencing increasing (decreasing) returns to scale and is producing at its minimum SAC can always reduce average cost in the long run by expanding (reducing) the use of the fixed input.:292-99 :186
- LRATC will always equal to or be less than SATC.:211
- If production process is exhibiting constant returns to scale then minimum SRAC equals minimum long run average cost. The LRAC and SRAC intersect at their common minimum values. Thus under constant returns to scale SRMC = LRMC = LRAC = SRAC .
- If the production process is experiencing decreasing or increasing, minimum short run average cost does not equal minimum long run average cost. If increasing returns to scale exist long run minimum will occur at a lower level of output than SRAC. This is because there are economies of scale that have not been exploited so in the long run a firm could always produce a quantity at a price lower than minimum short run aveage cost simply by using a larger plant.
- With decreasing returns, minimum SRAC occurs at a lower production level than minimum LRAC because a firm could reduce average costs by simply decreasing the size or its operations.
- The minimum of a SRAC occurs when the slope is zero. Thus the points of tangency between the U-shaped LRAC curve and the minimum of the SRAC curve would coincide only with that portion of the LRAC curve exhibiting constant economies of scale. For increasing returns to scale the point of tangency between the LRAC and the SRAc would have to occur at a level of output below level associated with the minimum of the SRAC curve.
These statements assume that the firm is using the optimal level of capital for the quantity produced. If not, then the SRAC curve would lie "wholly above" the LRAC and would not be tangent at any point.
Both the SRAC and LRAC curves are typically expressed as U-shaped.:211; 226 :182;187-188 However, the shapes of the curves are not due to the same factors. For the short run curve the initial downward slope is largely due to declining average fixed costs.:227 Increasing returns to the variable input at low levels of production also play a role, while the upward slope is due to diminishing marginal returns to the variable input.:227 With the long run curve the shape by definition reflects economies and diseconomies of scale.:186 At low levels of production long run production functions generally exhibit increasing returns to scale, which, for firms that are perfect competitors in input markets, means that the long run average cost is falling;:227 the upward slope of the long run average cost function at higher levels of output is due to decreasing returns to scale at those output levels.:227
- Economic cost
- General equilibrium
- Joel Dean (economist)
- Partial equilibrium
- Point of total assumption
- The slope of the short-run production function equals the marginal product of the variable input, conventionally labor. The slope of the variable cost function is marginal costs. The relationship between MC and the marginal product of labor MPL is MC = w/MPL. Because the wage rate w is assumed to be constant the shape of the variable cost curve is completely dependent on the marginal product of labor. The short-run total cost curve is simply the variable cost curve plus fixed costs.
- Perloff, J. Microeconomics, 5th ed. Pearson, 2009.
- Perloff, J., 2008, Microeconomics: Theory & Applications with Calculus, Pearson. ISBN 978-0-321-27794-7
- Lipsey, Richard G. (1975). An introduction to positive economics (fourth ed.). Weidenfeld & Nicolson. pp. 57–8. ISBN 0-297-76899-9.
- Sexton, Robert L., Philip E. Graves, and Dwight R. Lee, 1993. "The Short- and Long-Run Marginal Cost Curve: A Pedagogical Note", Journal of Economic Education, 24(1), p. 34. [Pp. 34-37 (press +)].
- Gelles, Gregory M., and Mitchell, Douglas W., "Returns to scale and economies of scale: Further observations," Journal of Economic Education 27, Summer 1996, 259-261.
- Frisch, R., Theory of Production, Drodrecht: D. Reidel, 1965.
- Ferguson, C. E., The Neoclassical Theory of Production and Distribution, London: Cambridge Univ. Press, 1969.
- Pindyck, R., and Rubinfeld, D., Microeconomics, 5th ed., Prentice-Hall, 2001.
- Nicholson: Microeconomic Theory 9th ed. Page 238 Thomson 2005
- Kreps, D., A Course in Microeconomic Theory, Princeton Univ. Press, 1990.
- Binger, B., and Hoffman, E., Microeconomics with Calculus, 2nd ed., Addison-Wesley, 1998.
- Frank, R., Microeconomics and Behavior 7th ed. (Mc-Graw-Hill) ISBN 978-0-07-126349-8 at 321.
- Melvin & Boyes, Microeconomics, 5th ed., Houghton Mifflin, 2002
- Perloff, J. Microeconomics Theory & Application with Calculus Pearson (2008) p. 231.
- Nicholson: Microeconomic Theory 9th ed. Page Thomson 2005
- Boyes, W., The New Managerial Economics, Houghton Mifflin, 2004. | http://en.wikipedia.org/wiki/Cost_curve | 13 |
One of the most interesting aspects of Shwa is trivalent logic. The only natural language with trivalent logic, as far as I know, is Aymará, a relative of Quechua spoken on the altiplano of Bolivia (see www.aymara.org) whose trivalent logic was elucidated by Iván Guzmán de Rojas. However, Aymará only has trivalent modal logic, while Shwa logic is completely trivalent.
This chapter will present trivalent logic in more depth than you need to use it, because it's interesting. In preparation, we'll recap ternary numbers and bivalent logic for you in the first three sections.
Many people are familiar with binary, or base 2 numbers, which are the basis for both digital computing and classical (Boolean or Aristotelian) propositional logic. In binary notation, only the digits 0 and 1 are used, and each successive place represents another power of two. Binary is also the justification for octal (base 8) and hexadecimal (base 16) arithmetic, since those powers of two can represent groups of binary digits.
This section will introduce you to another notation: ternary numbers. It's called ternary because there are three digits and each successive place represents another power of three, but instead of using the digits 0, 1, and 2 (which I would call trinary or base 3), ternary uses the digits 0 and 1 and the minus sign −, which represents −1. Since the biggest benefits of this approach result from the symmetry of 1 and −1, this notation is also called balanced ternary. Just like a single binary digit is called a bit, a single ternary digit should be called a tert (not trit or tit).
In ternary, the numbers zero and one are represented by 0 and 1, as in binary and decimal (base 10) notation. It first becomes interesting at the number two, which is represented in ternary as 1−. The digit 1 is in the 3s place, representing the value 3, and the digit − is in the 1s place, representing the value −1. Adding 3 and −1 together gives us 2. Likewise, three is written 10, and four is written 11. Here are the first few numbers in ternary: zero is 0, one is 1, two is 1−, three is 10, four is 11, five is 1−−, six is 1−0, seven is 1−1, eight is 10−, nine is 100, and ten is 101.
In decimal notation, negative numbers are preceded by a minus sign, which is kind of an 11th digit, since you can't write all the integers without it. (It's too bad that we don't use a leading 0 to represent negative numbers instead, since the two uses are disjoint.) In binary computers, negative integers are represented in two's-complement notation, which requires an upper limit on the size of integers. If 16-bit integers are used, the highest bit, which would normally represent 32,768 (2 to the 15th power), instead represents −32,768! This makes the 16-bit binary representation of −1 a series of sixteen 1s: 1111111111111111, which is to be interpreted as 32767−32768. Arcane!
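To see this concretely, here's a tiny Python sketch (my own illustration, not part of the original text) that masks a few integers to 16 bits and prints their two's-complement bit patterns:

    # 16-bit two's complement: masking to 16 bits shows the stored bit pattern.
    # -1 comes out as sixteen 1s, i.e. 32767 + (-32768) = -1.
    for n in (-1, -8, 8):
        print(n, format(n & 0xFFFF, '016b'))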
In contrast, −1 in ternary notation is simply −, while −2 is −1 and −3 is −0. In general in ternary notation, negative numbers start with −, and the negation of any number just replaces 1s with −s and vice versa. The number eight is 10−, and negative eight is −01. As you saw in Shwa's Reverse notation for numbers, a balanced notation makes negative integers much cleaner.
We won't go much deeper into ternary numbers here, but addition, subtraction, and multiplication work digit by digit much as in decimal, with carries. For single terts, 1+1 = 1− (two), 1+0 = 1, 1+− = 0, 0+− = −, and −+− = −1 (negative two); subtraction is just addition of the negation; and multiplication only needs 1×1 = 1, 1×− = −, −×− = 1, and anything times 0 is 0. Here are some worked examples:
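The following is a minimal Python sketch (the function names to_ternary and from_ternary are my own) for converting between decimal and balanced ternary, writing the ASCII '-' for the digit −; it reproduces the representations above and lets you check sums and products by converting, computing, and converting back:

    # A sketch of balanced ternary conversion, writing '-' for the digit -1.
    def to_ternary(n):
        if n == 0:
            return '0'
        digits = ''
        while n != 0:
            r = n % 3                      # remainder is 0, 1 or 2
            if r == 2:                     # 2 = 3 - 1: write '-' and carry 1
                digits = '-' + digits
                n = (n + 1) // 3
            else:
                digits = str(r) + digits
                n = n // 3
        return digits

    def from_ternary(s):
        value = 0
        for d in s:
            value = value * 3 + (-1 if d == '-' else int(d))
        return value

    print(to_ternary(8), to_ternary(-8))                         # 10- -01
    print(to_ternary(from_ternary('1-') + from_ternary('1-')))   # 2 + 2 = 4  -> 11
    print(to_ternary(from_ternary('1-0') * from_ternary('1-')))  # 6 * 2 = 12 -> 110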
While many of you are familiar with binary numbers, most readers will welcome a quick recap of classical bivalent propositional logic before proceeding to the trivalent case. This logic is also called Boolean logic after the logician George Boole, although many thinkers from Aristotle to Bertrand Russell made major contributions. The word bivalent means two-valued: in this system, propositions are either true or false.
We aren't going to concern ourselves with the formal aspects of logic, the notions of proof or the development of theorems from axioms. Instead, we're going to offer a whirlwind tour of bivalent notation to prepare you for what follows.
The relationship of bivalent logic to binary arithmetic is one of homology, meaning that entities and relationships in one field often correspond to similar ones in the other. For example, consider the bivalent operations of disjunction ∨ and conjunction ∧, the logical operations corresponding to our words or (in its inclusive sense, where both choices might be true) and and.
The way to interpret the table on the left is that if proposition A is false and proposition B is false, then proposition A∨B (A or B) is false; otherwise, it's true. In other words, if either of two propositions is true, then their disjunction is, too. Likewise, the interpretation of the table on the right is that if proposition A is true and proposition B is true, then proposition A∧B (A and B) is true; otherwise, it's false. In other words, if either of two propositions is false, then their conjunction is, too.
There are actually two homologies with binary arithmetic. The first matches the two operations above with the binary max and min operations, which return the larger and smaller of two numbers, respectively. The number one is assigned to the truth value true, and zero is assigned to the truth value false.
The other homology with arithmetic matches the two logical operations with binary addition and multiplication, but involves the signs of the numbers, not the numbers themselves. In this case, all the positive numbers are assigned to the truth value true.
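Both homologies can be checked with a couple of lines of Python (an illustration of mine): writing true as 1 and false as 0, disjunction behaves like max and like the sign of a sum, while conjunction behaves like min and like the sign of a product.

    # With true = 1 and false = 0: OR ~ max ~ sign of A + B, AND ~ min ~ sign of A * B.
    for A in (1, 0):
        for B in (1, 0):
            print(A, B, 'or:', max(A, B), int(A + B > 0), 'and:', min(A, B), int(A * B > 0))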
There is even a third leg to this homology: set theory. In elementary set theory, we are concerned with whether an element e is in set A, symbolized as e∈A, or not, symbolized as e∉A. The set of all elements which are in either of two sets is called their union ∪, while the set of all elements which are in both of two sets is called their intersection ∩. Here are their tables:
In the union table, e∈A∪B is false only when e∉A and e∉B; in the intersection table, e∈A∩B is true only when e∈A and e∈B.
These two operators, disjunction/addition/union and conjunction/multiplication/intersection, are called binary or two-place operators because they depend on two arguments, or inputs. They're also called connectives because they connect two values. An operator that depends on only one input is called unary or one-place, and an operator that doesn't depend on any inputs is called a constant or zero-place operator.
There are five other two-place operators in bivalent logic that are interesting to us, plus a single one-place operator. Here are their truth tables:
Negation changes 1s to 0s and vice versa. Surprisingly, its homologue in binary arithmetic is not negation, but complementation: 1-A, which is also its homologue in set theory. We'll be talking much more about negation below.
Subtraction is homologous with the same operations in arithmetic and set theory. A-B is synonymous with A∧¬B, just as it is with A×-B and A∩-B. (Note that in ordinary arithmetic, A-B means A+-B.)
Alternation is also called exclusive-or or XOR. It is the equivalent of the usual English meaning of the word or, which excludes both. If you say "Would you like red wine or white?", you are usually offering a choice, not both. In bivalent logic, this is synonymous with A∨B - A∧B.
Implication, also called the conditional, is the operation most fraught with profound meaning, which unfortunately we can't explore here. Its homologues are arithmetic less-than-or-equal-to ≤ and set inclusion ⊆. In bivalent logic, this is synonymous with ¬A∨B: either A is false or B is true. I didn't bother showing the reversed version <=/≥/⊇, which just points in the other direction.
Equality, properly called equivalence, coimplication, or the biconditional, is a synonym for implication in both directions: A<=>B means A<=B and A=>B. The homologues are equality = in both arithmetic and set theory.
Inequality is the negation of equality, homologous with inequality ≠. Note that its truth table is identical with that of alternation, but I have shown them both since they differ in the trivalent case.
There's one more, degenerate, unary operator, Assertion, which is the Identity operator: the assertion of a true proposition is true, and the assertion of a false proposition is false. I mention it to be complete, and also because it's interesting to know that asserting a proposition is equivalent to asserting that it's true.
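All of these connectives are just small functions of two truth values, so their tables can be generated mechanically. Here's a short Python sketch (the dictionary and its names are my own) that prints the tables described above, with 1 for true and 0 for false:

    # Bivalent connectives as functions on {1, 0}, printed as truth tables.
    ops = {
        'or':      lambda a, b: max(a, b),
        'and':     lambda a, b: min(a, b),
        'minus':   lambda a, b: min(a, 1 - b),   # A and not-B
        'xor':     lambda a, b: int(a != b),     # alternation
        'implies': lambda a, b: int(a <= b),     # not-A or B
        'iff':     lambda a, b: int(a == b),     # equivalence / coimplication
    }
    for name, op in ops.items():
        print(name, [(a, b, op(a, b)) for a in (1, 0) for b in (1, 0)])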
These truth tables can also be expressed in columnar form, where each entry has its own row, but we don't need to do that here. They can also be expressed as rules of inference, along the lines of "If A is true and B is true, then A∧B is true". Rules of inference might also work backwards, like "If A∧B is true, then A is true (and B is true)". Or they could do both, like "If A is true and A→B is true, then B is true". This last one is called modus ponens, and is the fundamental rule of inference.
Using either truth tables or rules, we could construct proofs of propositions, where each step derives logically from previous steps. Often, these proofs start with certain assumptions and try to deduce the consequences, but sometimes there are no assumptions, and so the proposition is universally true.
One universal truth in classical logic is ¬(A ∧ ¬A), which is called the law of non-contradiction. It states that a proposition can't be true and false at the same time: that would be a contradiction.
Another universal truth is A ∨ ¬A, called the law of the excluded middle or the law of bivalence, which asserts that every proposition is either true or false - there's no other choice. It could be expressed as A | ¬A, since it's never true that both A and ¬A are true at the same time. In fact, if you assume B and are able to derive a contradiction such as A∧¬A, there must be something wrong with B - this type of reasoning is called reductio ad absurdum.
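Both laws (and modus ponens) can be verified by brute force over the two truth values; here's a small Python sketch with helper names of my own choosing:

    from itertools import product

    # A formula is a tautology if it's true under every assignment of truth values.
    def tautology(f):
        return all(f(a, b) for a, b in product((True, False), repeat=2))

    non_contradiction = lambda a, b: not (a and not a)                  # ¬(A ∧ ¬A)
    excluded_middle   = lambda a, b: a or not a                         # A ∨ ¬A
    modus_ponens      = lambda a, b: not (a and (not a or b)) or b      # (A ∧ (A→B)) → B

    print(tautology(non_contradiction), tautology(excluded_middle), tautology(modus_ponens))
    # True True True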
Oddly enough, this type of thing isn't what logicians do! Instead, they abstract a level or two, and make assertions about whole logical systems at once. The bivalent logic presented in this section is one such system, and in fact is used as a basis for many systems, such as predicate logic, which introduces quantifiers like for all ∀ and there exists ∃, and modal logic, which introduces modal operators like ◻ Necessary and ⋄ Possible.
In the 1930s, a logician named Kurt Gödel proved that any classical logical system powerful enough to be useful would either be inconsistent (meaning it could prove contradictions) or incomplete (meaning it would leave some propositions unprovable). This result, which is called the Incompleteness Theorem, was quite a blow to philosophy, as it seems to state that some things must always remain unknowable.
Among the alternatives that have been explored are bivalent logics that reject the law of bivalence. At least one of these systems (Fitch) can be proven to be both consistent and complete, and it's powerful enough to serve as the basis for arithmetic. It's not that there's any alternative to a statement being either true or false - the system is still bivalent - but that bivalence isn't a law, or axiom, and thus reductio ad absurdum doesn't work.
That's too bad, because if you had a system where reductio ad absurdum worked, and you showed (using the Incompleteness Theorem) that the law of bivalence led to inconsistency, then you would have proven that the universe isn't bivalent!
The logic above can be extended to cover cases when we don't know whether a proposition is true or not, for example because it refers to the future. This is called modal logic, and the two traditional modal operators are Necessity and Possibility, represented by ◻ and ⋄, respectively. We also use the word Impossible as a shorthand for "not Possible". By definition, if a proposition is Necessary, then it must also be Possible.
For example, ◻Barça will win the Champions League means it is necessary that Barça win the Champions League, or Barça will necessarily win the Champions League, or simply Barça must win the Champions League. ⋄ Barça will win the Champions League means it is possible that Barça will win the Champions League, or Barça will possibly win the Champions League, or simply Barça might win the Champions League.
There is a set of relationships between these two operators, mediated by negation: ◻p is the same as ¬⋄¬p, ⋄p is the same as ¬◻¬p, ◻¬p is the same as ¬⋄p, and ⋄¬p is the same as ¬◻p.
For instance, the first one says that if p must be true, then -p can't be true (it's not possible that -p be true).
There is a strong homology here with quantification, and in fact modal logic can be seen as quantification over a set of possible future worlds, or possible unknown facts.
So Necessary, Possible and Impossible correspond to English All/Every/Each, Some/A and No/None/Not Any, and ◻ ⋄ correspond to ∀ ∃.
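That correspondence is easy to sketch in code. Here's a small Python illustration (the worlds and the proposition are hypothetical, and the function names are mine) treating the modal operators as quantifiers over a set of possible worlds:

    # Modal operators as quantifiers over possible worlds:
    # Necessary = true in every world, Possible = true in some world,
    # Impossible = true in no world.
    def necessary(p, worlds):  return all(p(w) for w in worlds)
    def possible(p, worlds):   return any(p(w) for w in worlds)
    def impossible(p, worlds): return not possible(p, worlds)

    worlds = [{'barca_wins': True}, {'barca_wins': False}]
    p = lambda w: w['barca_wins']
    print(necessary(p, worlds), possible(p, worlds), impossible(p, worlds))   # False True False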
I'm going to take the bivalent case one step further, so you'll recognize it when you see it in the trivalent case.
The proposition p, whose truth value is unknown to us, can be described as Necessary, Possible or Impossible. If it's Necessary, it's also Possible, so I'm going to introduce a new term, Potential, to mean "Possible but not Necessary". Then a proposition must be Necessary, Potential or Impossible, but only one of the three. Likewise, the negation of a proposition could be Necessary, Potential or Impossible, but only one of the three.
On the face of it, that gives us nine combinations. But given the laws of bivalence and non-contradiction, it turns out that there are only three viable combinations: if p is Necessary, then ¬p is Impossible; if p is Potential, then ¬p is also Potential; and if p is Impossible, then ¬p is Necessary.
I'm going to call these three combinations Modalities, and give them the following names:
In Shwa logic, there are three truth values: true, false, and wrong. True means the same in Shwa as it does in classical logic, but false means something different. In classical logic, false means not true, but in Shwa A is false means the negation of A is true.
That's a subtle difference, but consider the proposition "The King of France is Japanese". That statement is clearly not true, so in classical logic it's false. But its negation, "The King of France is not Japanese", is also not true, since there is no King of France (not since 1789, although France has had a few Emperors since then). So in Shwa, both the original statement and its negation are wrong. That's what wrong means: that neither the statement nor its negation are true.
[To be fair, some classical logicians would say that "The King of France is Japanese" is not a proposition, since it has no referent. Others would say that the statement is false, and that its negation is "It's not true that the King of France is Japanese", which is true. Yet others would say that it means "There is a King of France, and he's Japanese", which is false and has the negation "There is no King of France or he's not Japanese", which is true. Still others would say that it means that anybody who is the King of France is also Japanese, in other words that being the King of France implies being Japanese, and since the premise of the conditional is always false - there is no King of France - then the proposition as it reads is true, and so is its apparent negation!]
If a proposition is Wrong, that doesn't mean it's meaningless, like "Colorless green ideas sleep furiously". The sentence "The King of France is Japanese" isn't meaningless; there just happens not to be a King of France right now. This is a case of a missing referent, but not all Wrong propositions lack referents. For instance, "Shakespeare left Alaska by boat" isn't missing any referents: both Shakespeare and Alaska existed, and so do boats. But Shakespeare never went to Alaska, so he could never have left it by boat or any other way. You could say the proposition is false, but it's a funny kind of false, since its negation, "Shakespeare didn't leave Alaska by boat", is also false. That's called presupposition failure.
But the best examples of wrong statements are in the middle between true and false. Imagine that it's not really raining, but it's drizzling: it seems wrong to say "It's raining", but it also seems wrong to say "It's not raining". Or the propositions that zero is a natural number, or that i ≥ -i (where i represents √-1): they're neither true nor false. Finally, you can use Wrong to respond to a query where neither yes nor no seems truthful, for instance if I ask you whether the stock market went up after the Great Crash of 1929. Well, yes it did, but first it fell, and it remained below its previous levels for many years afterwards. It doesn't really matter how a Wrong statement is untrue, as long as its negation is also untrue.
But a proposition isn't Wrong just because you don't know whether it's true or not, for instance because it's in the future. Both future events and simple ignorance are examples of modal statements, which we'll discuss below.
Bivalent logic is deeply embedded in English, which makes it difficult to express trivalent statements. To compensate, I'll reserve the English words false, negation, and not for falseness, and use the words wrong, objection and neither to indicate wrongness, as in "It's neither raining (nor not raining)".
By the way, the three truth values of Shwa ternary logic - True, False and Wrong - are homologous with the three digits of balanced ternary numbers (1, 0, and −1), and also with the three signs (plus, zero, and minus) of real arithmetic. Because of that, from now on I'll put Wrong before False, so the normal order will be True - Wrong - False, with Wrong in the middle.
Now that you know what Wrong means, and how it's different from False, let's consider how trivalent logic works.
The most important operators are ¬ and ~: ¬p means "the negation of p", and ~p means "the objection of p". Here are their truth tables:
I added a third unary operator, Assertion, to the end of the chart. It's not very important, except to note that saying something is equivalent to saying it's true. For example, "Roses are red" is equivalent to "It's true that roses are red".
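As a concrete sketch, here is one way to encode these three values and the unary operators in Python. The negation table follows the description above (it swaps true and false, and a wrong proposition's negation stays wrong), but the tables for objection and assertion are my plausible guesses rather than definitive readings of the chart.

```python
# One way to encode Shwa's three truth values and unary operators in Python.
# True/Wrong/False map to +1/0/-1, following the homology with balanced
# ternary digits mentioned below. The objection and assertion tables are
# assumptions for illustration, not the author's own chart.

T, W, F = +1, 0, -1
NAMES = {T: "true", W: "wrong", F: "false"}

def negation(p):
    # ¬p: true <-> false, and wrong stays wrong
    return -p

def objection(p):
    # ~p (assumed): asserts that p is wrong
    return T if p == W else F

def assertion(p):
    # one plausible reading: "it's true that p" holds only when p is true
    return T if p == T else F

for p in (T, W, F):
    print(f"p={NAMES[p]:5}  ¬p={NAMES[negation(p)]:5}  "
          f"~p={NAMES[objection(p)]:5}  assertion={NAMES[assertion(p)]:5}")
```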
Let's also discuss some two-place connectives. The first two are straightforward extensions of the bivalent forms:
These two connectives are homologous with ternary maximum and minimum, respectively.
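To make that homology concrete, here is a minimal sketch of the two connectives as ternary maximum and minimum, using the same +1/0/−1 encoding; the function names conj and disj are mine.

```python
# Trivalent "and" and "or" as ternary minimum and maximum, per the stated
# homology, using the encoding true=+1, wrong=0, false=-1.
T, W, F = +1, 0, -1
NAMES = {T: "T", W: "W", F: "F"}

def conj(a, b):   # conjunction, homologous with minimum
    return min(a, b)

def disj(a, b):   # disjunction, homologous with maximum
    return max(a, b)

print("a b | and or")
for a in (T, W, F):
    for b in (T, W, F):
        print(f"{NAMES[a]} {NAMES[b]} |  {NAMES[conj(a, b)]}   {NAMES[disj(a, b)]}")
```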
The implication connective is homologous with ≤, and coimplication with =; that is, coimplication is the trivalent homologue of equality, just as in the bivalent case. In other words, if A<=B and A=>B, then A<=>B.
As you were warned, trivalent inequality is not equivalent to alternation.
The trivalent alternation operator I show here has an advantage over its bivalent counterpart. Bivalent alternation cannot be chained the way conjunction and disjunction can: A | B | C will be true if all three are true, not just one (no matter how you associate it). But trivalent alternation can be chained: A | B | C will be true if and only if exactly one of the three is true, and it will only be false if all of them are false.
There are two other connectives in trivalent logic that have no bivalent homologues, although they do in ternary arithmetic: addition (ignoring carries) and multiplication:
As in the bivalent logic described above, Shwa also has a law of trivalence, which states that a proposition must be either true or false or wrong, and a law of non-contradiction which states that it can be only one of the three. And we can derive the Shwa equivalent of DeMorgan's Laws:
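Here is a quick brute-force check of the classical De Morgan forms under the min/max reading of the connectives sketched earlier; this verifies my reconstruction, and is not the author's own table of Shwa equivalents.

```python
# Brute-force check of the classical De Morgan forms over all nine pairs of
# trivalent values, using arithmetic negation and the min/max connectives
# assumed in the sketches above.
T, W, F = +1, 0, -1
values = (T, W, F)
neg  = lambda p: -p
conj = lambda a, b: min(a, b)
disj = lambda a, b: max(a, b)

ok = all(
    neg(conj(a, b)) == disj(neg(a), neg(b)) and
    neg(disj(a, b)) == conj(neg(a), neg(b))
    for a in values for b in values
)
print("De Morgan holds for all nine value pairs:", ok)
```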
As I mentioned above, wrong has nothing to do with the question of whether you know a truth value. Propositions are wrong because they are neither true nor false, not because you don't know whether they're true or false. Instead, there is a whole set of modalities to specify precisely how much you know about a proposition.
In the section above on bivalent modal logic, I introduced two modal operators, ◻ and ⋄. We use the same two in trivalent logic, with the same meanings. However, the relationships linking them via negation are weaker:
In particular, as the last line shows, if you can eliminate one of the truth values, you can't assume which of the other two holds, as you could in the bivalent case.
In fact, there are a total of seven possible modalities:
For example, consider the sentence "Santa Claus likes milk and cookies". Well, if he exists, it's true, but we don't know whether he really exists (I've seen him many times, but I'm still skeptical). If he doesn't exist, the sentence is wrong. Since we don't know which it is, the sentence is Certainly Not False. But if it turns out that he does exist but he doesn't like milk and cookies, then the sentence was false, and it was also false to say that it was Certainly Not False, but it wasn't wrong to say so!
You may be wondering what the benefit is of all this complication. First of all, it brings sentences like "I did not have sex with that woman" into the purview of logic, as opposed to dismissing them as aberrant.
More interestingly, trivalent logic can actually draw sure conclusions from unsure premises. In bivalent modal logic, there is no middle ground between knowing nothing about the truth of a proposition and knowing everything about it. But in trivalent modal logic, we can know something about the truth of a proposition.
Initial Velocity Components
It has already been stated and thoroughly discussed that the horizontal and vertical motions of a projectile are independent of each other. The horizontal velocity of a projectile does not affect how far (or how fast) a projectile falls vertically. Perpendicular components of motion are independent of each other. Thus, an analysis of the motion of a projectile demands that the two components of motion are analyzed independent of each other, being careful not to mix horizontal motion information with vertical motion information. That is, if analyzing the motion to determine the vertical displacement, one would use kinematic equations with vertical motion parameters (initial vertical velocity, final vertical velocity, vertical acceleration) and not horizontal motion parameters (initial horizontal velocity, final horizontal velocity, horizontal acceleration). It is for this reason that one of the initial steps of a projectile motion problem is to determine the components of the initial velocity.
Earlier in this unit, the method of vector resolution was discussed. Vector resolution is the method of taking a single vector at an angle and separating it into two perpendicular parts. The two parts of a vector are known as components and describe the influence of that vector in a single direction. If a projectile is launched at an angle to the horizontal, then the initial velocity of the projectile has both a horizontal and a vertical component. The horizontal velocity component (vx) describes the influence of the velocity in displacing the projectile horizontally. The vertical velocity component (vy) describes the influence of the velocity in displacing the projectile vertically. Thus, the analysis of projectile motion problems begins by using the trigonometric methods discussed earlier to determine the horizontal and vertical components of the initial velocity.
Consider a projectile launched with an initial velocity of 50 m/s at an angle of 60 degrees above the horizontal. Such a projectile begins its motion with a horizontal velocity of 25 m/s and a vertical velocity of 43 m/s. These are known as the horizontal and vertical components of the initial velocity. These numerical values were determined by constructing a sketch of the velocity vector with the given direction and then using trigonometric functions to determine the sides of the velocity triangle. The sketch is shown at the right and the use of trigonometric functions to determine the magnitudes is shown below. (If necessary, review this method on an earlier page in this unit.)
All vector resolution problems can be solved in a similar manner. As a test of your understanding, utilize trigonometric functions to determine the horizontal and vertical components of the following initial velocity values. When finished, click the button to check your answers.
Practice A: A water balloon is launched with a speed of 40 m/s at an angle of 60 degrees to the horizontal.
Practice B: A motorcycle stunt person traveling 70 mi/hr jumps off a ramp at an angle of 35 degrees to the horizontal.
Practice C: A springboard diver jumps with a velocity of 10 m/s at an angle of 80 degrees to the horizontal.
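If you would like to check your answers with a short computation, the Python sketch below resolves each practice velocity into its components using vx = v cos(θ) and vy = v sin(θ); values are rounded to one decimal place, and the motorcycle components stay in mi/hr since no unit conversion is asked for.

```python
import math

# Resolve an initial velocity into components: vx = v*cos(theta), vy = v*sin(theta)
def components(speed, angle_deg):
    theta = math.radians(angle_deg)
    return speed * math.cos(theta), speed * math.sin(theta)

cases = [
    ("Practice A (water balloon)",     40.0, 60.0, "m/s"),
    ("Practice B (motorcycle jump)",   70.0, 35.0, "mi/hr"),
    ("Practice C (springboard diver)", 10.0, 80.0, "m/s"),
]
for name, v, angle, unit in cases:
    vx, vy = components(v, angle)
    print(f"{name}: vx = {vx:.1f} {unit}, vy = {vy:.1f} {unit}")
```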
Try Some More!
As mentioned above, the point of resolving an initial velocity vector into its two components is to use the values of these two components to analyze a projectile's motion and determine such parameters as the horizontal displacement, the vertical displacement, the final vertical velocity, the time to reach the peak of the trajectory, the time to fall to the ground, etc. This process is demonstrated on the remainder of this page. We will begin with the determination of the time.
The time for a projectile to rise vertically to its peak (as well as the time to fall from the peak) is dependent upon vertical motion parameters. The process of rising vertically to the peak of a trajectory is a vertical motion and is thus dependent upon the initial vertical velocity and the vertical acceleration (g = 9.8 m/s/s, down). The process of determining the time to rise to the peak is an easy process - provided that you have a solid grasp of the concept of acceleration. When first introduced, it was said that acceleration is the rate at which the velocity of an object changes. An acceleration value indicates the amount of velocity change in a given interval of time. To say that a projectile has a vertical acceleration of -9.8 m/s/s is to say that the vertical velocity changes by 9.8 m/s (in the - or downward direction) each second. For example, if a projectile is moving upwards with a velocity of 39.2 m/s at 0 seconds, then its velocity will be 29.4 m/s after 1 second, 19.6 m/s after 2 seconds, 9.8 m/s after 3 seconds, and 0 m/s after 4 seconds. For such a projectile with an initial vertical velocity of 39.2 m/s, it would take 4 seconds for it to reach the peak where its vertical velocity is 0 m/s. With this notion in mind, it is evident that the time for a projectile to rise to its peak is a matter of dividing the vertical component of the initial velocity (viy) by the acceleration of gravity.
Once the time to rise to the peak of the trajectory is known, the total time of flight can be determined. For a projectile that lands at the same height which it started, the total time of flight is twice the time to rise to the peak. Recall from the last section of Lesson 2 that the trajectory of a projectile is symmetrical about the peak. That is, if it takes 4 seconds to rise to the peak, then it will take 4 seconds to fall from the peak; the total time of flight is 8 seconds. The time of flight of a projectile is twice the time to rise to the peak.
The horizontal displacement of a projectile is dependent upon the horizontal component of the initial velocity. As discussed in the previous part of this lesson, the horizontal displacement of a projectile can be determined using the equation
x = vix • t
If a projectile has a time of flight of 8 seconds and a horizontal velocity of 20 m/s, then the horizontal displacement is 160 meters (20 m/s • 8 s). If a projectile has a time of flight of 8 seconds and a horizontal velocity of 34 m/s, then the projectile has a horizontal displacement of 272 meters (34 m/s • 8 s). The horizontal displacement is dependent upon the only horizontal parameter that exists for projectiles - the horizontal velocity (vix).
A non-horizontally launched projectile with an initial vertical velocity of 39.2 m/s will reach its peak in 4 seconds. The process of rising to the peak is a vertical motion and is again dependent upon vertical motion parameters (the initial vertical velocity and the vertical acceleration). The height of the projectile at this peak position can be determined using the equation
y = viy • t + 0.5 • g • t2
where viy is the initial vertical velocity in m/s, g is the acceleration of gravity (-9.8 m/s/s) and t is the time in seconds it takes to reach the peak. This equation can be successfully used to determine the vertical displacement of the projectile through the first half of its trajectory (i.e., peak height) provided that the algebra is properly performed and the proper values are substituted for the given variables. Special attention should be given to the facts that the t in the equation is the time up to the peak and the g has a negative value of -9.8 m/s/s.
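The Python sketch below pulls these pieces together for the earlier example of a projectile launched at 50 m/s at 60 degrees above the horizontal and landing at launch height. It is simply a worked illustration of the equations on this page, with g taken as 9.8 m/s/s.

```python
import math

# Worked example: projectile launched at 50 m/s, 60 degrees above horizontal,
# landing at the same height from which it was launched.
g = 9.8                      # magnitude of gravitational acceleration, m/s^2
v0, angle = 50.0, 60.0

vix = v0 * math.cos(math.radians(angle))   # horizontal component, ~25 m/s
viy = v0 * math.sin(math.radians(angle))   # vertical component, ~43 m/s

t_peak  = viy / g                          # time to rise to the peak
t_total = 2 * t_peak                       # trajectory is symmetric about the peak
x_range = vix * t_total                    # horizontal displacement, x = vix * t
y_peak  = viy * t_peak + 0.5 * (-g) * t_peak**2   # y = viy*t + 0.5*g*t^2, g negative

print(f"vix = {vix:.1f} m/s, viy = {viy:.1f} m/s")
print(f"time to peak = {t_peak:.2f} s, total time of flight = {t_total:.2f} s")
print(f"horizontal displacement = {x_range:.0f} m, peak height = {y_peak:.0f} m")
```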
Answer the following questions and click the button to see the answers.
1. Aaron Agin is resolving velocity vectors into horizontal and vertical components. For each case, evaluate whether Aaron's diagrams are correct or incorrect. If incorrect, explain the problem or make the correction.
2. Use trigonometric functions to resolve the following velocity vectors into horizontal and vertical components. Then utilize kinematic equations to calculate the other motion parameters. Be careful with the equations; be guided by the principle that "perpendicular components of motion are independent of each other."
3. Utilize kinematic equations and projectile motion concepts to fill in the blanks in the following tables.
This tutorial exhibits some of the applications of integral calculus. Although these are only some of the applications of integration, they will give you an idea of how calculus can be used to solve problems in other areas of mathematics.
The area A of the region bounded by the curves f(x) and g(x) and the lines x=a and x=b, where f and g are continuous and f(x)>g(x) for all x in the interval [a, b], is given by
A = ∫ab [ f(x) - g(x) ] dx  (the integral from a to b of f(x) - g(x))
The figure to the right illustrates the region that is given by the definition above. To make the image clearer, the curve f(x) is shown in blue and the curve g(x) is shown in red.
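In practice this definition translates directly into a numerical computation. The short Python sketch below evaluates the area between two sample curves over a fixed interval; the curves and limits are my own illustrative choices, not an example from this tutorial.

```python
from scipy.integrate import quad

# Area between f(x) = x + 2 and g(x) = x**2 over [0, 1], where f(x) > g(x):
#   A = integral from a to b of (f(x) - g(x)) dx
f = lambda x: x + 2
g = lambda x: x**2

a, b = 0.0, 1.0
area, err = quad(lambda x: f(x) - g(x), a, b)
print(f"area = {area:.4f}")   # exact value: 1/2 + 2 - 1/3 = 13/6 ≈ 2.1667
```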
In some problems, the curves may intersect so that f(x) is not greater than g(x) over the entire interval [a, b]. The graph to the right illustrates this situation. In this case, we must find the point of intersection, c, between the two curves. To find the point of intersection c, we set f(x) = g(x) and solve the resulting equation for x. To find the area, we split the region A between the curves into 2 separate regions: A1, bounded by f(x) and g(x) and the lines x=a and x=c, and A2, bounded by g(x) and f(x) and the lines x=c and x=b. To find the area of A, we calculate the area of each individual region and take the sum of the two. For the curve to the right, the equation for the area of the region would be given by
A = ∫ac [ f(x) - g(x) ] dx + ∫cb [ g(x) - f(x) ] dx
Note: This equation depends on the nature of the question. Not all problems will use this same equation to determine the area. For example, if the curve g(x) is greater than f(x) on [a,c), and less than f(x) on (c,b], we would switch the f(x) with g(x) in the equation above.
In other problems, the region may be bounded by the curves f(x) and g(x) over part of the interval [a,b] but may be bounded by only a single curve for another part of the interval. The figure below illustrates this situation.
In a problem like this, there are two possible approaches.

a) The first method is to find the point of intersection, c, between the two curves. We split the region A between the curves into 2 separate regions: A1, bounded by f(x) and g(x) and the lines x=a and x=c, and A2, bounded by +f(x) and -f(x) and the lines x=c and x=b. The image to the right illustrates the first method of solving this type of area problem. The formula below shows how the area would be calculated for this specific example:
A = ∫ac [ f(x) - g(x) ] dx + ∫cb [ f(x) - (-f(x)) ] dx
Note: Like the problem above, this equation depends on the nature of the question. You must recognize which function defines the upper boundary and which defines the lower boundary over a certain interval.
b) The second method involves rewriting the problem so that it is in terms of y rather than x. We express each curve in terms of y, so that we have f(y) for the right boundary and g(y) for the left boundary. We must find the y coordinates of the intersection points of the curves. We let y=b represent the higher intersection point and y=a represent the lower intersection point. The image to the right illustrates the second approach to solving the example above. The equation below shows how the area would be calculated for this specific example:
A = ∫ab [ f(y) - g(y) ] dy
Note: Once again, this equation depends on the nature of the question.
In fact, this second method can be used to solve any area problem. It may often be simpler to solve the integrals when the functions are expressed in terms of y rather than x. If the problem seems too difficult, we can always try to solve it in terms of y.
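As a concrete illustration of the intersecting-curves case, here is a Python sketch that finds an intersection point numerically and then integrates the difference of the curves; the specific curves f(x) = √x and g(x) = x² are my own example, not one taken from this tutorial.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Example curves (chosen for illustration): f(x) = sqrt(x), g(x) = x**2.
# They intersect at x = 0 and x = 1, and f is on top over (0, 1).
f = lambda x: math.sqrt(x)
g = lambda x: x**2

# Find the nonzero intersection point c by solving f(x) - g(x) = 0.
c = brentq(lambda x: f(x) - g(x), 0.5, 1.5)

# Area between the curves from 0 to c (f above g on this interval).
area, _ = quad(lambda x: f(x) - g(x), 0.0, c)
print(f"intersection at x = {c:.4f}, area = {area:.4f}")   # expect area = 1/3
```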
Find the area of the region between the curves
Find the area of the region between the intersecting curves
Find the area of the region between the curves, with respect to y
Integration also allows us to calculate the volumes of solids. Let S be a solid that lies between x=a and x=b. Let the continuous function A(x) represent the cross-sectional area of S in the plane through the point x and perpendicular to the x-axis. The volume of S is given by
V = ∫ab A(x) dx  (the integral from a to b of the cross-sectional area)
Now that we have the definition of volume, the challenging part is to find the function for the area of a given cross section. This process is quite similar to finding the area between curves.
Most volume problems that we will encounter will require us to calculate the volume of a solid of rotation. These are solids that are obtained when a region is rotated about some line. A typical volume problem would ask, "Find the volume of the solid obtained by rotating the region bounded by the curve(s) about some specified line." Since the region is rotated about a specific line, the solid obtained by this rotation will have a disk-shaped cross-section.
We know from simple geometry that the area of a circle is given by A = πr2. For each cross-sectional disk, the radius is determined by the curves that bound the region. If we sketch the region bounded by the given curves, we can easily find a function to determine the radius of the cross-sectional disk at point x.
The figures above illustrate this concept. The figure to the left shows the region bounded by the curve y = √x, the x-axis, and the lines x = 0 and x = 2. The figure in the center shows the 3-dimensional solid that is formed when the region from the first figure is rotated about the x-axis. The figure to the right shows a typical cross-sectional disk. A disk for a given value x between 0 and 2 will have a radius of √x. The area of the disk is given by A(x) = π(√x)2 or, equivalently, A(x) = πx. Once we find the area function, we simply integrate from a to b to find the volume. The examples below will show complete solutions to finding the volume of a given solid.
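Here is a short Python sketch of that disk-method computation for the region under y = √x between x = 0 and x = 2, rotated about the x-axis; the exact answer works out to 2π.

```python
import math
from scipy.integrate import quad

# Disk method: region under y = sqrt(x), 0 <= x <= 2, rotated about the x-axis.
# Each cross-sectional disk has radius sqrt(x), so A(x) = pi * (sqrt(x))**2 = pi * x.
A = lambda x: math.pi * x

volume, _ = quad(A, 0.0, 2.0)
print(f"volume = {volume:.4f}  (exact: 2*pi = {2 * math.pi:.4f})")
```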
Variations of Volume Problems
There are several variations in these types of problems. The first factor that can vary in a volume problem is the axis of rotation. What if the region from the figure above was rotated about the y-axis rather than the x-axis? We would end up with a different function for the radius of a cross-sectional disk. The function would be written with respect to y rather than x, so we would have to integrate with respect to y. In general, we can use the following rule:
If the region bounded by the curves f(x) and g(x) and the lines x=a and x=b is rotated about an axis parallel to the x-axis, write the integral with respect to x. If the axis of rotation is parallel to the y-axis, write the integral with respect to y.
The second factor that can vary in volume problems is the radius of a typical cross-sectional disk. Suppose that the region is bounded by two curves, f(x) and g(x), that both vary between x=a and x=b. The solid that is created by rotating this region about some specified line will have a hole in the center. The radius of a cross-sectional disk will be determined by two functions, rather than a single function. This variation creates two separate styles of problems:
The Disk Method: The disk method is used when the cross sections are disk shaped. The radius of a cross section is determined by a single function, f(x). The area of the disk is given by the formula A(x) = π(radius)2. The figure to the right shows a typical cross-sectional disk.
The Washer Method: The washer method is used when the cross sections are washer shaped. The radius of a cross section is determined by two functions, f(x) and g(x). This gives us two separate radii: an outer radius, from f(x) to the axis of rotation, and an inner radius, from g(x) to the axis of rotation. The area of the washer is given by the formula A(x) = π[ (outer radius)2 - (inner radius)2 ]. The figure to the left shows a typical cross-sectional washer.
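The washer formula is just as easy to check numerically. In the Python sketch below, the bounding curves f(x) = √x (outer) and g(x) = x² (inner), rotated about the x-axis, are my own example; the exact volume is 3π/10.

```python
import math
from scipy.integrate import quad

# Washer method example: region between f(x) = sqrt(x) and g(x) = x**2
# on [0, 1], rotated about the x-axis.
#   A(x) = pi * (outer_radius**2 - inner_radius**2)
outer = lambda x: math.sqrt(x)
inner = lambda x: x**2

A = lambda x: math.pi * (outer(x)**2 - inner(x)**2)
volume, _ = quad(A, 0.0, 1.0)
print(f"volume = {volume:.4f}  (exact: 3*pi/10 = {3 * math.pi / 10:.4f})")
```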
Find the volume of the solid (axis of rotation parallel to y-axis)
Find the volume of the solid using the washer method
There is another method to solve volume problems, in addition to the methods described in the section above. The method of cylindrical shells is sometimes simpler to use than the previous methods. For example, suppose we want to find the volume of the solid obtained by rotating about the y-axis the region bounded by the curve y=3x-x3 and the line y=0. Since the axis of rotation is parallel to the y-axis, we would ordinarily integrate the area function with respect to y. To do this, we must solve the cubic function for x in terms of y, which would be rather difficult. However, we will soon see that this problem can be solved quite easily using the method of cylindrical shells.
The image to the right shows a cylindrical shell with outer radius r2, inner radius r1 and height h. We can calculate the volume of the shell by finding the volume of the inner cylinder, V1, and subtracting it from the volume of the outer cylinder, V2. Recall from the geometry tutorial that the volume of a cylinder is given by the formula V = πr2h. We let Δr represent the thickness of the cylindrical shell and r represent the average radius of the shell. In summary, the volume of a cylindrical shell is given by the following formula:
V = 2πrh·Δr  (average circumference 2πr, times height h, times thickness Δr)
Now that we have covered the concept of a cylindrical shell, we can apply it to general volume problems. The idea behind the method of cylindrical shells is to think of a 3-dimensional solid as a collection of cylindrical shells. To find the volume of the solid, we must integrate the formula for the volume of a cylindrical shell. However, the formula for the volume of the cylindrical shell will vary with each problem. We must find functions for the height and the radius of the cylindrical shell at x. Suppose the functions are h(x) for height and r(x) for radius. The volume of the solid obtained by rotating a region about a specific line from a to b is given by
V = ∫ab 2π r(x) h(x) dx
This may seem complicated, but after a few examples the method will be much clearer.
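Returning to the example mentioned above (the region bounded by y = 3x - x3 and y = 0, rotated about the y-axis), here is a Python sketch of the shell computation; the exact value 12√3·π/5 ≈ 13.06 is my own evaluation, not a result stated in the tutorial.

```python
import math
from scipy.integrate import quad

# Shell method: region bounded by y = 3x - x**3 and y = 0, rotated about the y-axis.
# For 0 <= x <= sqrt(3): shell radius r(x) = x, shell height h(x) = 3x - x**3.
r = lambda x: x
h = lambda x: 3 * x - x**3

b = math.sqrt(3)                       # right-hand x-intercept of y = 3x - x**3
integrand = lambda x: 2 * math.pi * r(x) * h(x)
volume, _ = quad(integrand, 0.0, b)
print(f"volume = {volume:.4f}  (exact: 12*sqrt(3)*pi/5 = {12 * math.sqrt(3) * math.pi / 5:.4f})")
```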
Find the volume of the solid using the method of cylindrical shells
Find the volume of the solid
For more practice with the concepts covered in this tutorial, visit the Area and Volume Problems page at the link below. The solutions to the problems will be posted after these chapters are covered in your calculus course.
Area & Volume
To test your knowledge of applications of integration, try the general area and volume test on the iLrn website or the area and volume test at the link below.
General Area & Volume Test on iLrn
Advanced Area & Volume Test
The least known and appreciated of the conic sections has some interesting applications nevertheless
The hyperbola is the least known and used of the conic sections. We seldom see a hyperbola in daily life, and it seldom appears in decoration or design. In spite of this, it has interesting properties and important applications. There is a literary term, hyperbole, that is the same word in Greek, meaning an excess. How the hyperbola acquired this name is related in Parabola, together with some general information on conic sections, and the focal definition of the hyperbola.
The feature of the hyperbola is its asymptotes. A curve is said to approach a straight line as an asymptote when for any distance ε you may choose, there is always a point on the line beyond which the curve is closer to it than ε. This is, of course, in a certain direction along the line that extends to infinity. A hyperbola has two asymptotes that make equal angles with the coordinate axes and pass through the origin O. Near the origin, the hyperbola passes from one asymptote to the other in a smooth curve. There are two branches of the hyperbola, starting from opposite ends of the asymptotes. For most practical purposes, the hyperbola can be considered as the asymptote itself except in the neighborhood of the origin.
A hyperbola is sketched at the right. The origin is O, and the asymptotes form a symmetrical cross as shown. V and V' are the vertices of the hyperbola, at a distance a on each side of the origin. Perpendicular lines from V and V' define a rectangle by their points of intersection with the asymptotes, and the sides of this rectangle are a and b. Two parameters are required to specify a hyperbola, as for an ellipse. The slope of the asymptotes is |b/a|. Then, the hyperbola can be represented as the quadratic curve (x/a)2 - (y/b)2 = 1, the canonical equation of a hyperbola.
The foci F and F' are located a distance c > a from the origin, where c is the hypotenuse of the right triangle whose sides are a and b. If you draw the reference rectangle for the hyperbola, the foci can be located quite simply by swinging an arc. The difference in the distances F'P and FP from the foci to any point P on the hyperbola is equal to 2a. It is not difficult to prove that this definition is equivalent to the canonical equation. Moreover, as the sketch indicates, the angle between FP and the normal to the hyperbola is equal to the angle between the normal and F'P, so a ray from F is reflected by the hyperbola so that it appears to be coming from the other focus. This is the analogue to the reflecting properties of the parabola and ellipse. The ratio c/a is the eccentricity of the hyperbola, and is > 1. We see that b = a(e2 - 1)1/2, and that the semi-latus rectum p = b2/a. The latter is derived from the right triangle with legs p and 2c, whose hypotenuse must be of length p + 2a from the focal definition.
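As a quick numerical illustration of the focal definition, the Python sketch below samples points on the right branch of a hyperbola with arbitrarily chosen a and b and confirms that the difference of the focal distances is 2a at every sampled point; the values a = 3 and b = 2 are just for the check.

```python
import math

# Check the focal property F'P - FP = 2a on the right branch of
# (x/a)^2 - (y/b)^2 = 1, parameterized as x = a*cosh(t), y = b*sinh(t).
a, b = 3.0, 2.0                      # arbitrary example values
c = math.hypot(a, b)                 # focal distance, c^2 = a^2 + b^2

for t in (-2.0, -0.5, 0.0, 1.0, 2.5):
    x, y = a * math.cosh(t), b * math.sinh(t)
    d_far  = math.hypot(x + c, y)    # distance to focus F' at (-c, 0)
    d_near = math.hypot(x - c, y)    # distance to focus F  at (+c, 0)
    print(f"t={t:+.1f}: F'P - FP = {d_far - d_near:.6f}  (2a = {2 * a})")
```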
As with the other conic sections, the hyperbola has conjugate diameters. To exhibit them, we need the conjugate hyperbola, which is constructed on the same reference rectangle. Its equation is obtained by changing +1 to -1 in the canonical equation, or by interchanging a and b. Its foci F'' and F''' are the same distance c from the origin, so all four foci lie on a circle. Of course, the asymptotes are the same. A diameter, such as AB, is any line passing through O that intersects the two branches of the hyperbola. The conjugate diameter CD is drawn between the points of tangency of lines parallel to the diameter that touch the conjugate hyperbola. The conjugate diameter bisects all these chords (it does not seem so in the sketch, because the curves are not accurate). This property may be used to construct normals and tangents as an alternative to the focal property.
The polar equation of the hyperbola is r = p / (1 + e cos ω), which gives both branches as ω goes from 0 to 2π, one branch corresponding to negative values of r. The asymptotic directions are given by ω = cos-1 (1/e). A parametric equation is x = a cosh t, y = b sinh t, using hyperbolic functions, and another is x = a sec t, y = b tan t. Finally, a hyperbola is the intersection of a cone (really, a double cone extending in both directions) with a plane with an inclination greater than the cone angle.
The draftsman is not often required to draw hyperbolas. It is easiest to draw one from the focal definition. An arc is drawn from F with any radius r, and this is intersected by an arc drawn from F' with radius r - 2a. It is easy to locate the foci when a and b are given, so this process is convenient. The intersections are good near the origin, but become poor farther out on the asymptotes. However, they are not needed here.
When b = a, a special curve is obtained that bears the same relation to the hyperbola as the circle bears to the ellipse. The reference rectangle becomes a square, and the asymptotes make angles of 45° with the axes, and are perpendicular to each other. This is called the equilateral hyperbola, and all these curves are the same shape, differing only in size. The canonical equation becomes x2 - y2 = a2. If the asymptotes are taken as the coordinate axes, the result is xy = a2/2, or xy = constant, a pleasantly elegant result. Isotherms of an ideal gas, pV = nRT, are equilateral hyperbolas. Other examples of this relationship can be found. Unlike circles, equilateral hyperbolas are not good wheels, and are not as easy to draw.
If one point P is known on an equilateral hyperbola, another P' can be found by the construction sketched at the right. Horizontal and vertical lines AC and PE are drawn through P. Then any point B on AO is chosen, and a horizontal line drawn through B intersecting PE at D. Now a line OC is drawn from the orgin through D to a point C on the horizontal line through P. The intersection of the horizontal through B and a vertical line through C determines the second point P'. We see that AC/OE = AO/BO from similar triangles, so AC · BO = OE · AO, which is just xy = x'y'
An example of an equilateral hyperbola occurring in nature is shown at the left. Two parallel glass plates in contact at the left, and separated by about 5 mm at the right, are dipped in beet juice, which rises by capillarity to form an equilateral hyperbola. This can be shown as follows: if the separation of the plates is d = ax cm, and the surface tension is T dyne/cm, then by equating the upward capillary force to the weight of the fluid supported in a small distance dx, 2Tdx = ρgyaxdx, since the angle of contact is zero. Therefore, xy = 2T/ρga = constant, so the curve is a rectangular hyperbola.
The hyperbolic functions mentioned above are combinations of exponentials and their connection with the hyperbola is not obvious. From their names, they are analogous to the trigonometric functions. In fact, hyperbolic functions are related to the unit rectangular hyperbola x2 - y2 = 1 just as the trigonometric functions are related to the unit circle x2 + y2 = 1. If we introduce a parameter t, then the unit circle is expressed by x = cos t, y = sin t. Similarly, the unit hyperbola can be expressed as x = cosh t, y = sinh t. The parameter t in both cases can be interpreted as twice the area swept out by a radius vector from the origin O to a point P on the circle or hyperbola. For the circle, this relation is obvious, since A = (t/2π)(π) = t/2, where π is the area of the unit circle. These relations are shown in the figure. Relations between the functions are easily derived by using the properties of right triangles and the equations of the circle or the hyperbola.
For the hyperbola, we may make the linear substitution x - y = η√2 and x + y = ξ√2, which rotates the hyperbola to the first quadrant in the (ξ,η)-plane, where its equation is ξη = 1/2. It is a little tricky to find an easy way to find the area A and show that it equals t/2. The area we are seeking is A = area OAPQ - ΔOPQ, while area ABQP is area OAPQ - ΔOAB. The two triangles are of the same area, however, since their areas are ξη/2, which is a constant on the hyperbola. Twice the area is then easily seen to be 2A = 2∫ dξ/2ξ, taken from ξ = 1/√2 to ξ = (x+y)/√2, which gives 2A = ln(x+y) = ln[x ± √(x2 - 1)]. But cosh t = x, so that t = cosh-1x = ln[x ± √(x2 - 1)], and so 2A = t, just as for the circular functions.
The logarithmic function for the inverse of x = cosh t may be surprising. However, cosh t = (e^t + e^-t)/2 = x, or 2x = u + 1/u with the substitution u = e^t. The quadratic equation for u has the roots u = x ± √(x2 - 1), from which t = cosh-1 x = ln[x ± √(x2 - 1)]. Similar formulas for sinh-1x and tanh-1x exist. For example, we can write tanh t = x = (u - 1/u)/(u + 1/u), so u2 = (1 + x)/(1 - x), and t = (1/2)ln[(1 + x)/(1 - x)], valid for |x| < 1.
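These logarithmic forms are easy to check against a library implementation. The Python sketch below compares them with math.acosh and math.atanh at a few arbitrary sample points, taking the + sign in the cosh-1 formula, which selects the non-negative branch of t.

```python
import math

# Check cosh^-1(x) = ln(x + sqrt(x^2 - 1)) for x >= 1 (the + sign, t >= 0 branch)
for x in (1.0, 1.5, 3.0, 10.0):
    lhs = math.acosh(x)
    rhs = math.log(x + math.sqrt(x * x - 1))
    print(f"x={x:5.1f}  acosh={lhs:.6f}  log form={rhs:.6f}")

# Check tanh^-1(x) = (1/2) ln((1 + x)/(1 - x)) for |x| < 1
for x in (-0.9, -0.3, 0.0, 0.5, 0.99):
    lhs = math.atanh(x)
    rhs = 0.5 * math.log((1 + x) / (1 - x))
    print(f"x={x:5.2f}  atanh={lhs:.6f}  log form={rhs:.6f}")
```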
The relation cosh2t - sinh2t = 1 is also easily derived by expressing the hyperbolic functions in terms of exponentials. Every trigonometric relation has a hyperbolic analogue, perhaps differing by a minus sign. To find cosh(a + b), for example, use e^a = cosh a + sinh a and e^b = cosh b + sinh b in 2cosh(a + b) = e^a e^b + e^(-a) e^(-b). Multiply out and combine terms. Of course cosh(-a) = cosh a and sinh(-a) = -sinh a. The result is cosh(a + b) = cosh a cosh b + sinh a sinh b. For the derivatives, we find d(cosh x)/dx = sinh x and d(sinh x)/dx = cosh x. By the inverse function rule, d(sinh-1x)/dx = 1/cosh(sinh-1x) = 1/√(x2 + 1).
Elliptic functions are not defined analogously to circular and hyperbolic functions, but in terms of certain elliptic integrals, so-called because they solve problems associated with the elllipse, such as arc length.
Newtonian mechanics tells us that any of the conic sections can be an orbit, and we have investigated the cases of planets (ellipses) and comets (parabolas) in the pages on those curves. The hyperbolic orbit is the path of a particle under an inverse-square force that approaches the center of attraction or repulsion at a finite speed along an asymptote, is deflected, and recedes in the same way along the other asymptote. The effect is to change the direction of motion of the particle, without changing its speed. As always, we consider the center of force to be fixed. All two-body problems can be reduced to this case. Our conclusions apply only to the reference system in which the center is fixed. The motion takes place with constant areal velocity A = h/2, where h is a constant related to the angular momentum.
There are no examples of celestial bodies with hyperbolic orbits about the sun. They are not impossible, merely very unlikely, and probably have occurred from time to time. Anything other than volatile cometary debris would probably not be noticed unless it was quite large and dangerous. Such encounters have been blamed for the Moon, but this is just wild speculation. Hyperbolic orbits could be created within the solar system, by certain types of gravitational encounters, or by rockets, but escaping from the Sun is rather difficult.
Alpha particles are the nuclei of helium atoms, with mass 4 and a positive charge of 2e. They are emitted from certain heavy nuclei, such as Polonium, as they strive to a more stable state, with energies in the MeV (mega-electron-volt) range. They knock electrons out of any atoms near their paths, creating densely ionized paths that can be observed in cloud chambers where they trigger condensation. They exhaust their energy in a few centimeters in air, and in a very short distance in solid materials. They cause ZnS crystals to give a flash of luminescence if they hit them, so they can be observed and counted in a spinthariscope. Modern instrumentation makes their observation and counting much more convenient, but this is all that was available in the early 1900's.
Ernest Rutherford (1871-1937) and his students noted in 1911 that alpha particles passing through very thin gold foils were occasionally scattered through large angles. This is an extraordinary effect, like firing a rifle through a wheat field and having the bullet come back at you. What would be expected were numerous slight deflections by the positive charges distributed through the matter. Electrons were known to be light, and could not produce large deflections, just slight wiggles in the paths (which are observed). To cause large deflections, the positive charge and the mass must be concentrated in very small volumes. Rutherford showed that although atoms have a radius of the order of 10-8 cm, the mass and positive charge are concentrated within a radius of about 10-13 cm. If an atom were the size of the earth, then its nucleus would be a few meters in diameter.
An alpha particle, with charge +2e and mass of 4 amu, would then approach a gold nucleus of charge +79e and mass 197 amu at high velocity, and at an impact parameter of b. Only for small b would there be a considerable deflection, and in this case the electrons could be considered distant and diffuse, with the full nuclear charge effective. The force between charges ze and Ze is zZe2/4πεr2 in MKSA units, so the trajectory will be a hyperbola. Let K = zZe2/4πεm, where m is the (reduced) mass of the alpha particle. This corresponds to GM = k2 for the gravitational problem. The impact parameter is just the semi-minor axis b of the hyperbola (verify this by drawing a right triangle), while a is determined by the total energy. In fact, a = K/vo2, where vo is the initial velocity. We note that the two branches of a hyperbola correspond to attractive and repulsive orbits.
Knowing a and b, we know the hyperbola, and can find the angle between the asymptotes, and thus the deflection D. D = π - 2θ, so tan θ = cot (D/2) = bvo2/K. This is the relation between the impact parameter b and the deflection D. The distance of closest approach is q = a (e + 1), where e can be found from a and b. This is how Rutherford determined an upper limit on the size of the nucleus, from the maximum observed deflection of the alpha particles. The solid angle between the axis and the deflection D is Ω = 2π(1 - cos D), while the area within the impact parameter b is A = πb2. Therefore, the area dA for scattering into a solid angle dΩ is given by dA/dΩ = C / sin4(D/2), where C = (zZe2/8πεmvo2)2. This is the famous Rutherford scattering differential cross section, proportional to the inverse fourth power of half the angle of deflection. C is the cross section for scattering directly backwards (D = 180°).
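The relation tan θ = bvo2/K is easy to put numbers into. The Python sketch below does so for an alpha particle on gold; the 5 MeV kinetic energy and the sample impact parameters are my own illustrative choices, and the heavy gold nucleus is treated as fixed, so the reduced mass is approximated by the alpha-particle mass.

```python
import math

# Deflection of an alpha particle (z=2) by a gold nucleus (Z=79), treated as fixed.
# K = z*Z*e^2 / (4*pi*eps0*m);  tan(theta) = b*v0^2/K;  D = pi - 2*theta.
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
m    = 6.6446573e-27        # alpha-particle mass, kg
z, Z = 2, 79

E_MeV = 5.0                                   # assumed kinetic energy
E = E_MeV * 1e6 * e                           # convert to joules
v0 = math.sqrt(2 * E / m)                     # non-relativistic initial speed
K = z * Z * e**2 / (4 * math.pi * eps0 * m)

for b in (1e-12, 1e-13, 3e-14, 1e-14):        # impact parameters, meters
    theta = math.atan(b * v0**2 / K)
    D = math.degrees(math.pi - 2 * theta)
    print(f"b = {b:.0e} m  ->  deflection D = {D:6.2f} degrees")
```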
Rutherford's discovery of the nucleus led soon after to Bohr's atom, and from there to quantum mechanics, revealing our modern view of matter. Rutherford received a peerage and a Nobel Prize, which were richly deserved. Discoveries in mathematics and physics may lead to understanding; discoveries in most other sciences lead only to knowledge.
Light that enters a sphere of water and is reflected from the far side to re-emerge in the direction from which it entered is responsible for the lovely phenomenon of the rainbow. The colors arise near a caustic surface created by the fact that there is an angle of minimum deviation. Near this limit there are two beams that interfere to produce a maximum of intensity. Since the angle differs slightly due to the variation of the index of refraction with wavelength, bright colors are seen. Red is on the outside of the rainbow, and blue is on the inside. There may be a secondary rainbow outside the primary one, with a reversed order of colors, which corresponds to two internal reflections. The space between the primary and secondary rainbows is darker than the space outside, since there is no scattered light in this area. Beyond the blue edge of the rainbow supernumerary arcs are often seen, which alternate green and pink. This is just a brief review of the properties of a variable phenomenon. For more information, see a reliable source such as R. A. R. Tricker, Introduction to Meterorological Optics (New York: American Elsevier, 1970).
The primary rainbow angle is about 42°, as shown in the diagram on the right. Any droplet in the cone of angle 42° with vertex at the eye and axis in the solar direction will send color to the eye, whatever its distance may be. This holds for raindrops a and b, as well as for dewdrop c. The rainbow is familiar and is often seen, especially on summer afternoons, but the dewbow is less often noticed. It appears when looking westward over a lawn on a misty morning. The dewdrops give a brilliant reflection in the direction of the antisolar point, where your head casts a shadow, so you can recognize the axis of the cone clearly. This is the heiligenschein, a different, cat's eye, effect that is not related to the dewbow. There may even be a colored glory if there is a mist. The dewbow is seen between the antisolar point and your station, stretching right and left in a curve along the ground. It is the section of the rainbow cone by the earth, and is, therefore, a hyperbola. The secondary rainbow and supernumerary arcs have not been reported in the dewbow, but they certainly exist under the proper conditions, and are something to look for specially. Dew occurs when the surface has cooled by radiation below the temperature of the air, and below the dew point at which the air is saturated by water.
Rainbow phenomena can also be seen in the droplets produced by lawn sprinklers and hose nozzles, or any other source of water droplets. Surface tension produces accurately spherical droplets, especially with small droplets, where gravitational and aerodynamic forces are negligible.
Hyperbolas are not used in surveying for transition curves between two tangents, that might be considered as asymptotes, because they have no definite start or end, and are difficult to compute and lay out. They are also not used for arches or bridges, since they are not as pleasing to the eye as ellipses, circles and parabolas and offer no structural advantages. They do not occur naturally in terrestrial motion or physical processes. For these reasons, hyperbolas are seldom encountered, except as discussed above.
Suppose there are two radio broadcasting stations that emit waves containing accurate timing information. A ship may receive these signals, and note the time displacement between them, which corresponds to a certain distance depending on the propagation speed of the signals. This defines a hyperbola that can be drawn on the map, since the foci are known. If there is a third station, additional hyperbolas can be drawn and the location of the ship determined by the intersection of the curves, with a valuable check since there is more information than the minimum required. LORAN is an example of such a system. Of course, the time of travel of the signal from the broadcasting station can give circles of position that are easier to draw, but the hyperbola method must be used if the times of emission are not known and only differences can be measured.
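As a toy illustration of this hyperbolic-navigation idea, the Python sketch below takes three stations at made-up coordinates and two measured arrival-time differences, and solves numerically for the position at which the corresponding range differences (one hyperbola per station pair) are satisfied; it is only a flat-earth sketch, not how a real LORAN receiver works.

```python
import math
from scipy.optimize import fsolve

# Hypothetical flat-earth example: three stations (coordinates in km), speed c in km/s.
c = 299792.458
A, B, C = (0.0, 0.0), (100.0, 0.0), (0.0, 120.0)
true_pos = (60.0, 45.0)                       # ship position used to fake the measurements

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# "Measured" time differences of arrival (B relative to A, and C relative to A).
dt_BA = (dist(true_pos, B) - dist(true_pos, A)) / c
dt_CA = (dist(true_pos, C) - dist(true_pos, A)) / c

# Each time difference fixes a range difference, i.e. one hyperbola of position.
def equations(p):
    return (dist(p, B) - dist(p, A) - c * dt_BA,
            dist(p, C) - dist(p, A) - c * dt_CA)

fix = fsolve(equations, x0=(50.0, 50.0))
print(f"estimated position: ({fix[0]:.2f}, {fix[1]:.2f}) km")
```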
In the 17th century, there was great interest in improving telescopes, which were severely limited by the spherical aberration of the spherical refracting surfaces available, which meant that they did not bring parallel rays to a point focus. Descartes worked out the forms of surfaces that would bring rays coming from a point source (or infinity) to a point focus; that is, which would provide stigmatic imaging. These were the famous Cartesian Ovals, and included among them were hyperboloidal surfaces. Their form depended on the index of refraction and on the object and image distances, so they were inflexible in application, but worst, they could not be manufactured with sufficient accuracy. For optical work, the surfaces must be correct to less than a wavelength, and this was simply impossible for other than spherical (or cylindrical) surfaces. It is still largely impossible, though good aspheric optical surfaces can be made in certain cases, such as corrector plates for Schmidt and Maksutov telescopes. Approximate shapes are good enough for non-imaging lenses, such as condensers and illumination, and aspheric surfaces are quite popular for these applications, though the surfaces may or may not be hyperboloids. As it happens, there are other problems besides spherical aberration, such as chromatic aberration, and field of view, that cannot be solved with aspheric surfaces, though several coaxial spherical surfaces (which can be very accurately produced) and glasses of different index and dispersion, can solve the problem very well.
Composed by J. B. Calvert
Created 8 May 2002
Last revised 1 January 2005