A detailed description of the Fourier transform (FT) has waited until now, when you have a better appreciation of why it is needed. A Fourier transform is an operation which converts functions from the time domain to the frequency domain. An inverse Fourier transform (IFT) converts from the frequency domain to the time domain. The concept of a Fourier transform is not that difficult to understand. Recall from Chapter 2 that the Fourier transform is a mathematical technique for converting time domain data to frequency domain data, and vice versa.

You may have never thought about this, but the human brain is capable of performing a Fourier transform. Consider a few sine waves and the notes they correspond to. A musician with perfect pitch will tell us that one is middle C (261.63 Hz) on the western music scale, that another is the first G above middle C (392 Hz), and that a third is the C one octave above middle C (523.25 Hz). Some musicians can identify the notes even when more than one is played at a time, although this becomes more difficult as more notes are added. Play all of the above notes simultaneously. Can you hear which frequencies are being played? The Fourier transform can! Change the relative amplitudes of the notes. Can you determine their relative amplitudes with your ear? The Fourier transform can! The Fourier transform (FT) process is like the musician hearing a tone (time domain signal) and determining what note (frequency) is being played. The inverse Fourier transform (IFT) is like the musician seeing notes (frequencies) on a sheet of music and converting them to tones (time domain signals).

To begin our detailed description of the FT, consider the following. A magnetization vector, starting at +x, is rotating about the Z axis in a clockwise direction. The plot of Mx as a function of time is a cosine wave. Fourier transforming this gives peaks at both +ν and -ν, because the FT cannot distinguish between a +ν and a -ν rotation of the vector from the data supplied. A plot of My as a function of time is a -sine function. Fourier transforming this also gives peaks at +ν and -ν, because the FT cannot distinguish between a positive vector rotating at +ν and a negative vector rotating at -ν from the data supplied.

The solution is to input both Mx and My into the FT. The FT is designed to handle two orthogonal input functions called the real and imaginary components. Detecting just the Mx or My component for input into the FT is called linear detection. This was the detection scheme on many older NMR spectrometers and some magnetic resonance imagers. It required the computer to discard half of the frequency domain data. Detection of both Mx and My is called quadrature detection and is the method of detection on modern spectrometers and imagers. It is the method of choice since now the FT can distinguish between +ν and -ν, and all of the frequency domain data can be used.

An FT is defined by the integral

f(ω) = ∫ f(t) e^(-iωt) dt,  taken over all time.

Think of f(ω) as the overlap of f(t) with a wave of frequency ω. This is easy to picture by looking at the real part of f(ω) only. Consider the function of time, f(t) = cos(4t) + cos(9t). To understand the FT, examine the product of f(t) with cos(ωt) for ω values between 1 and 10, and then the summation of the values of this product. The summation will only be examined for time values between 0 and 10 seconds.
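This overlap-and-sum picture is easy to reproduce numerically. The short sketch below is not from the original text; the time step and the frequency grid are arbitrary choices made only to illustrate the idea.

```python
# Sketch (not from the original text): numerically forming the "overlap" of
# f(t) = cos(4t) + cos(9t) with test waves cos(w*t), as described above.
import numpy as np

t = np.arange(0.0, 10.0, 0.001)          # time values between 0 and 10 seconds
f = np.cos(4 * t) + np.cos(9 * t)        # the example time-domain function

for w in range(1, 11):                   # test frequencies w = 1 ... 10
    overlap = np.sum(f * np.cos(w * t)) * 0.001   # summation of the product
    print(f"w = {w:2d}   overlap = {overlap:8.3f}")

# The overlap is close to zero except near w = 4 and w = 9, where it is large:
# those are the frequencies actually present in f(t).
```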
The inverse Fourier transform (IFT) is best depicted as a summation of the time domain spectra of the frequencies in f(ω).

The actual FT will make use of an input consisting of a REAL and an IMAGINARY part. You can think of Mx as the REAL input, and My as the IMAGINARY input. The resultant output of the FT will therefore have a REAL and an IMAGINARY component, too. In FT NMR spectroscopy, the real output of the FT is taken as the frequency domain spectrum. To see an esthetically pleasing (absorption) frequency domain spectrum, we want to input a cosine function into the real part and a sine function into the imaginary part of the FT. If instead the cosine part is input as the imaginary and the sine as the real, the real output of the FT is a dispersion-mode rather than an absorption-mode spectrum.

In an ideal NMR experiment all frequency components contained in the recorded FID have no phase shift. In practice, during a real NMR experiment a phase correction must be applied to either the time or frequency domain spectra to obtain an absorption spectrum as the real output of the FT. This process is equivalent to the coordinate transformation described in Chapter 2. If the above mentioned FID is recorded such that there is a 45° phase shift in the real and imaginary FIDs, the coordinate transformation matrix can be used with φ = -45°. The corrected FIDs look like a cosine function in the real and a sine in the imaginary. Fourier transforming the phase corrected FIDs gives an absorption spectrum for the real output of the FT.

The phase shift also varies with frequency, so NMR spectra require both constant and linear corrections to the phasing of the Fourier transformed signal. Constant phase corrections, b, arise from the inability of the spectrometer to detect the exact Mx and My. Linear phase corrections, m, arise from the inability of the spectrometer to detect transverse magnetization starting immediately after the RF pulse. The following drawing depicts the greater loss of phase in a high frequency FID when the initial (yellow) section is lost. From the practical point of view, the phase correction is applied in the frequency domain rather than in the time domain, because we know that a real frequency domain spectrum should be composed of all positive peaks. We can therefore adjust b and m until all positive peaks are seen in the real output of the Fourier transform. In magnetic resonance imaging, the Mx or My signals are rarely displayed. Instead a magnitude signal is used. The magnitude signal is equal to the square root of the sum of the squares of Mx and My.

To better understand how FT NMR functions, you need to know some common Fourier pairs. A Fourier pair is two functions, the frequency domain form and the corresponding time domain form. Here are a few Fourier pairs which are useful in MRI. The amplitude of the Fourier pairs has been neglected since it is not relevant in MRI.
- A constant value at all times pairs with a delta function at zero frequency.
- The signal with real part cos(2πνt) and imaginary part -sin(2πνt) pairs with a single delta function at the frequency of oscillation.
- A comb function (a series of delta functions separated by T) pairs with another comb function, with spacing 1/T.
- An exponential decay, e^(-at) for t > 0, pairs with a Lorentzian lineshape.
- A square pulse starting at 0 that is T seconds long pairs with a sinc function.

To the magnetic resonance scientist, the most important theorem concerning Fourier transforms is the convolution theorem. The convolution theorem says that the FT of a convolution of two functions is proportional to the product of the individual Fourier transforms, and vice versa. If f(ω) = FT( f(t) ) and g(ω) = FT( g(t) ), then

f(ω) · g(ω) = FT( f(t) ⊗ g(t) )   and   f(ω) ⊗ g(ω) = FT( f(t) · g(t) ),

where ⊗ denotes convolution. It will be easier to see this with pictures.
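It can also be checked numerically for the case discussed next, a sine wave that is turned on and off: multiplying the sine by a rectangular window in time convolves the sine's delta-function spectrum with the window's sinc-shaped spectrum, shifting that sinc to the sine frequency. The grid sizes and frequencies in the sketch below are arbitrary choices of mine, not values from the text.

```python
# Sketch: the convolution theorem for a sine wave that is turned on and off.
# The window's FT is sinc-shaped; multiplying by a complex sine in the time
# domain convolves that sinc with a delta function, i.e. shifts it.
import numpy as np

n, dt = 2048, 0.001
t = np.arange(n) * dt
window = (t < 0.25).astype(float)            # the sine is "on" for the first 0.25 s
sine = np.exp(2j * np.pi * 100.0 * t)        # 100 Hz complex sine wave

freqs = np.fft.fftshift(np.fft.fftfreq(n, dt))
spec_window = np.fft.fftshift(np.fft.fft(window))          # sinc centered at 0 Hz
spec_product = np.fft.fftshift(np.fft.fft(window * sine))  # same sinc, shifted

print(freqs[np.argmax(np.abs(spec_window))])    # peak at 0 Hz
print(freqs[np.argmax(np.abs(spec_product))])   # peak near 100 Hz
```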
In the animation window we are trying to find the FT of a sine wave which is turned on and off. The convolution theorem tells us that this is a sinc function centered at the frequency of the sine wave. Another application of the convolution theorem is in noise reduction. With the convolution theorem it can be seen that the convolution of an NMR spectrum with a Lorentzian function is the same as the Fourier transform of the time domain signal multiplied by an exponentially decaying function.

What is the FT of a signal represented by a series of delta functions? The answer will be addressed under the next heading, but first some information on the relationships between the sampled time domain data and the resultant frequency domain spectrum. An n point time domain signal, sampled every δt, takes a time t to record. The corresponding complex frequency domain spectrum that the discrete FT produces has n points, a width f, and resolution δf. The relationships between these quantities are as follows:

t = n δt,   f = (1/δt),   δf = (1/t).

The wrap around problem or artifact in a magnetic resonance image is the appearance of one side of the imaged object on the opposite side. In terms of a one dimensional frequency domain spectrum, wrap around is the occurrence of a low frequency peak on the wrong side of the spectrum. The convolution theorem can explain why this problem results from sampling the transverse magnetization at too slow a rate. First, observe what the FT of a correctly sampled FID looks like. With quadrature detection, the image width is equal to the sampling rate (the inverse of the sampling interval), represented by the width of the green box in the animation window. When the sampling rate is less than the spectral width or bandwidth, wrap around occurs.

The two-dimensional Fourier transform (2-DFT) is an FT performed on a two dimensional array of data. Consider the two-dimensional array of data depicted in the animation window. This data has a t' and a t" dimension. A FT is first performed on the data in one dimension and then in the second. The first set of Fourier transforms is performed in the t' dimension to yield a ν' by t" set of data. The second set of Fourier transforms is performed in the t" dimension to yield a ν' by ν" set of data. The 2-DFT is required to perform state-of-the-art MRI. In MRI, data is collected in the equivalent of the t' and t" dimensions, called k-space. This raw data is Fourier transformed to yield the image, which is the equivalent of the ν' by ν" data described above.
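Tying together the sampling relationships and the wrap-around artifact described above, here is a short numerical check; the point count, sampling intervals, and test frequency are assumed values chosen only for the demonstration.

```python
# Sketch: width and resolution of a discrete FT, and wrap-around (aliasing)
# when the sampling interval is doubled. All numbers are assumed.
import numpy as np

n, dt = 256, 0.001                 # n complex points sampled every dt seconds
print("width f =", 1 / dt, "Hz,  resolution df =", 1 / (n * dt), "Hz")   # 1000 Hz, ~3.9 Hz

# A 375 Hz signal fits inside a 1000 Hz-wide quadrature spectrum (-500 to +500 Hz).
# Sampled twice as slowly (width 500 Hz), it wraps around to 375 - 500 = -125 Hz.
for step in (dt, 2 * dt):
    tt = np.arange(n) * step
    signal = np.exp(2j * np.pi * 375.0 * tt)
    freqs = np.fft.fftshift(np.fft.fftfreq(n, step))
    peak = freqs[np.argmax(np.abs(np.fft.fftshift(np.fft.fft(signal))))]
    print(f"sampling interval {step * 1000:g} ms -> peak appears at {peak:g} Hz")
```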
http://www.cis.rit.edu/htbooks/mri/chap-5/chap-5.htm
Ohio Standards Alignment: angles in radian measure; definitions of trigonometric ratios.
Topics: review of right triangle trigonometry, the unit circle definition of trigonometric functions, radian measure, and graphs of trigonometric functions.

Students compare trigonometric ratios for a unit and non-unit circle with the goal of observing that the ratios are independent of the radius. They thus see the benefit of using a radius of one unit. Using the unit circle, students generate a trig table for angles in Quadrants I and II by connecting the x-coordinate to the cosine ratio and the y-coordinate to the sine ratio. Through pattern recognition the table is extended to Quadrants III and IV. The table is used to graph the three basic trigonometric functions -- sine, cosine, and tangent. This problem is appropriate for use after learning radian measure and as an opening activity to the unit circle.

Looking at the diagram of the unit circle, angle DGO is a right angle. The radius of the circle is 1 unit. With a protractor measure angle DOG. Label the measures of segments OG, DG and OD. Find the sine ratio of angle DOG. Find the cosine ratio of angle DOG. What do you notice about the sine and cosine of angle DOG?

Distribute the student worksheet. Ask students to complete Questions 1, 2, 3 and 4. A whole-class discussion should follow from Questions 3 and 4. Students should observe, when using the unit circle, that the sine ratio is the value of the y-coordinate and the cosine ratio is the value of the x-coordinate (as the denominators have a value of 1). Moreover, the measure of the intercepted arc is the radian measure of the angle. Students should then complete Questions 5 and 6. Have students discuss conjectures about patterns found in the table. What is the advantage of using a circle that has a radius of 1 unit? Without the benefit of drawing the angles on the unit circle, students should use observed patterns to complete Question 7. Is the sine ratio always defined? Is the cosine ratio always defined? Is the tangent ratio always defined? Questions 8 - 15 are follow-ups that lead to the graphs of the three basic trig functions. In which quadrants is the sine ratio negative? In which quadrants is the cosine ratio negative? In which quadrants is the tangent ratio negative? What is the domain for each of the 3 trig functions? What is the range for each of the 3 trig functions? Which graphs are continuous and why?

When looking for patterns, prompts may be given such as: Find a relationship between the sine and cosine ratios. Is the rate of change for the individual ratios constant? When are the ratios increasing, decreasing, etc.? Students working in groups can average the ratios they find to get better estimates. A discussion point can be that quadrantal angles do not create right triangles, but their cosine and sine ratios can be defined from the x and y-coordinates. Students may need some help understanding that, while the angles are defined relative to the x-axis, the measurement of all angles is from the positive x-axis counterclockwise, thus allowing for angle measurements over 90°. Students can check the accuracy of ratios using their calculators and also the graphs they obtain.

From the teaching files of Teresa Graham.
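The table the students build in Questions 5 through 7 can also be generated directly from the unit-circle definition (cosine as the x-coordinate, sine as the y-coordinate of the point on the circle). A short sketch for checking the class's estimates follows; the 30° spacing is an arbitrary choice, and this is not part of the lesson handout.

```python
# Sketch: a reference trig table from the unit-circle definition, where
# cos = x-coordinate, sin = y-coordinate, and the radian measure equals the
# intercepted arc length on a circle of radius 1.
import math

print(f"{'degrees':>8} {'radians':>8} {'cos=x':>8} {'sin=y':>8} {'tan':>10}")
for degrees in range(0, 361, 30):
    radians = math.radians(degrees)
    x, y = math.cos(radians), math.sin(radians)
    tan = "undefined" if abs(x) < 1e-12 else f"{y / x:10.3f}"
    print(f"{degrees:8d} {radians:8.3f} {x:8.3f} {y:8.3f} {tan:>10}")
```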
http://www.ohiorc.org/pm/math/richproblemmath.aspx?pmrid=63
Maxima and minima

In mathematics, the maximum and minimum (plural: maxima and minima) of a function, known collectively as extrema (singular: extremum), are the largest and smallest values that the function takes at a point either within a given neighborhood (local or relative extremum) or on the function domain in its entirety (global or absolute extremum). Pierre de Fermat was one of the first mathematicians to propose a general technique (called adequality) for finding maxima and minima. More generally, the maximum and minimum of a set (as defined in set theory) are the greatest and least elements in the set. Unbounded infinite sets such as the set of real numbers have no minimum or maximum. Locating extreme values is the basic objective of optimization.

Analytical definition

A real-valued function f defined on the real line is said to have a local (or relative) maximum point at the point x∗ if there exists some ε > 0 such that f(x∗) ≥ f(x) when |x − x∗| < ε. The value of the function at this point is called a maximum of the function. Similarly, a function has a local minimum point at x∗ if f(x∗) ≤ f(x) when |x − x∗| < ε. The value of the function at this point is called a minimum of the function. A function has a global (or absolute) maximum point at x∗ if f(x∗) ≥ f(x) for all x. Similarly, a function has a global (or absolute) minimum point at x∗ if f(x∗) ≤ f(x) for all x. The global maximum and global minimum points are also known as the arg max and arg min: the argument (input) at which the maximum (respectively, minimum) occurs.

Restricted domains: There may be maxima and minima for a function whose domain does not include all real numbers. A real-valued function, whose domain is any set, can have a global maximum and minimum. There may also be local maxima and local minima points, but only at points of the domain set where the concept of neighborhood is defined. A neighborhood plays the role of the set of x such that |x − x∗| < ε. A continuous (real-valued) function on a compact set always takes maximum and minimum values on that set. An important example is a function whose domain is a closed (and bounded) interval of real numbers (see the graph above). The neighborhood requirement precludes a local maximum or minimum at an endpoint of an interval. However, an endpoint may still be a global maximum or minimum. Thus it is not always true, for finite domains, that a global maximum (minimum) must also be a local maximum (minimum).

Finding functional maxima and minima

Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the largest (or smallest) one. Local extrema can be found by Fermat's theorem, which states that they must occur at critical points. One can distinguish whether a critical point is a local maximum or local minimum by using the first derivative test or the second derivative test. For any function that is defined piecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is largest (or smallest).
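As a concrete illustration of locating critical points with Fermat's theorem and classifying them with the second derivative test, here is a short sketch using the SymPy library for the function f(x) = x³/3 − x, which also appears in the examples below.

```python
# Sketch: find critical points of f(x) = x**3/3 - x where f'(x) = 0, then
# classify each one with the sign of the second derivative.
import sympy as sp

x = sp.symbols('x')
f = x**3 / 3 - x

critical_points = sp.solve(sp.diff(f, x), x)        # f'(x) = x**2 - 1 = 0
for p in critical_points:
    second = sp.diff(f, x, 2).subs(x, p)             # f''(x) = 2x evaluated at p
    kind = "local minimum" if second > 0 else "local maximum" if second < 0 else "inconclusive"
    print(f"x = {p}: f''(x) = {second}, {kind}")
# Output: x = -1 is a local maximum and x = 1 is a local minimum; the function
# is unbounded, so it has no global maximum or minimum.
```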
- The function x² has a unique global minimum at x = 0.
- The function x³ has no global minima or maxima. Although the first derivative (3x²) is 0 at x = 0, this is an inflection point.
- The function has a unique global maximum at x = e. (See figure at right.)
- The function x^(-x) has a unique global maximum over the positive real numbers at x = 1/e.
- The function x³/3 − x has first derivative x² − 1 and second derivative 2x. Setting the first derivative to 0 and solving for x gives stationary points at −1 and +1. From the sign of the second derivative we can see that −1 is a local maximum and +1 is a local minimum. Note that this function has no global maximum or minimum.
- The function |x| has a global minimum at x = 0 that cannot be found by taking derivatives, because the derivative does not exist at x = 0.
- The function cos(x) has infinitely many global maxima at 0, ±2π, ±4π, …, and infinitely many global minima at ±π, ±3π, ….
- The function 2 cos(x) − x has infinitely many local maxima and minima, but no global maximum or minimum.
- The function cos(3πx)/x with 0.1 ≤ x ≤ 1.1 has a global maximum at x = 0.1 (a boundary), a global minimum near x = 0.3, a local maximum near x = 0.6, and a local minimum near x = 1.0. (See figure at top of page.)
- The function x³ + 3x² − 2x + 1 defined over the closed interval (segment) [−4,2] has a local maximum at x = −1 − √15/3, a local minimum at x = −1 + √15/3, a global maximum at x = 2 and a global minimum at x = −4.

Functions of more than one variable

For functions of more than one variable, similar conditions apply. For example, in the (enlargeable) figure at the right, the necessary conditions for a local maximum are similar to those of a function with only one variable. The first partial derivatives of z (the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function z must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by reductio ad absurdum). In two and more dimensions, this argument fails, as the function f(x, y) = x² + y²(1 − x)³ shows. Its only critical point is at (0,0), which is a local minimum with ƒ(0,0) = 0. However, it cannot be a global one, because ƒ(4,1) = −11.

In relation to sets

Maxima and minima are more generally defined for sets. In general, if an ordered set S has a greatest element m, then m is a maximal element. Furthermore, if S is a subset of an ordered set T and m is the greatest element of S with respect to the order induced by T, then m is a least upper bound of S in T. Similar results hold for the least element, minimal elements and the greatest lower bound. In the case of a general partial order, the least element (smaller than all others) should not be confused with a minimal element (nothing is smaller).
Likewise, a greatest element of a partially ordered set (poset) is an upper bound of the set which is contained within the set, whereas a maximal element m of a poset A is an element of A such that if m ≤ b (for any b in A) then m = b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable. In a totally ordered set, or chain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element and the maximal element will also be the greatest element. Thus in a totally ordered set we can simply use the terms minimum and maximum. If a chain is finite then it will always have a maximum and a minimum. If a chain is infinite then it need not have a maximum or a minimum. For example, the set of natural numbers has no maximum, though it has a minimum. If an infinite chain S is bounded, then the closure Cl(S) of the set occasionally has a minimum and a maximum; in such cases they are called the greatest lower bound and the least upper bound of the set S, respectively.

See also
- First derivative test
- Second derivative test
- Higher-order derivative test
- Limit superior and limit inferior
- Mechanical equilibrium
- Sample maximum and minimum
- Saddle point
- Arg min, Arg max
http://en.wikipedia.org/wiki/Local_extremum
Enthalpies of Solution

Authors: B. D. Lamp, T. Humphry, V. M. Pultz and J. M. McCormick*

Last Update: March 20, 2013

Thermochemistry investigates the relationship between chemical reactions and energy changes involving heat. It was born out of the practical problem of cannon making and today continues to play an important role in almost every facet of chemistry. Practical applications of thermochemistry include the development of alternative fuel sources, such as fuel cells, hybrid gas-electric cars or gasoline supplemented with ethanol. On a fundamental level, thermochemistry is also important because the forces holding molecules or ionic compounds together are related to the heat evolved or absorbed in a chemical reaction. Therefore, chemists are interested in the thermochemistry of every chemical reaction, whether it be the solubility of lead salts in drinking water or the metabolism of glucose.

The amount of heat generated or absorbed in a chemical reaction can be studied using a calorimeter. A simplified schematic of a calorimeter is shown in Fig. 1. The "system" (our chemical reaction) is placed in a well-insulated vessel surrounded by water (surroundings). A thermometer is used to measure the heat transferred to or from the system to the surroundings. Ideally, only the water would be the "surroundings" in the thermodynamic sense, and the vessel would not allow heat to pass. In reality, the vessel does allow heat to pass from the water to the rest of the universe, and we will need to account for that.

Figure 1. Schematic representation of a calorimeter.

There are two types of calorimeters: constant-pressure and constant-volume.2,3 In the constant-volume calorimeter, the chemical reaction under study is allowed to take place in a heavy-walled vessel called a bomb. Because the reaction in the bomb takes place at constant volume, the heat that is generated by the reaction (mostly exothermic reactions are studied in a constant-volume calorimeter) is actually the change in the internal energy (ΔU) for the reaction. Although ΔU is a useful quantity, for chemists the change in enthalpy (ΔH) is more relevant. However, we can convert ΔU to ΔH using Eqn. 1, if we know the change in the number of moles of gas (Δn) in the reaction and the temperature (T).

ΔH = ΔU + Δn·R·T     (1)

In a constant-pressure calorimetry experiment, like the one that you will be performing, the energy released or absorbed is measured under constant atmospheric pressure. A constant-pressure calorimeter is simpler to assemble than a constant-volume calorimeter, and a wider range of chemical reactions can be studied with it. Also, because the reaction is run at constant pressure, ΔH is equal to the amount of heat a reaction generates or absorbs, and one need only measure the temperature change when the reactants are mixed to obtain ΔH for the reaction. Constant-pressure calorimetry is normally conducted with liquids or solutions that have the same temperature. When a solid is used, it is usually assumed that the solid's temperature is the same as the ambient temperature. After the measurement is made, the reactants are quickly placed into the constant-pressure calorimeter. If the reactants mix and react instantaneously, and the thermometer responds perfectly to the change in temperature, the change in the temperature (ΔT) would simply be as shown in Fig. 2.
Note that if the calorimeter is perfect (no heat leaks) the temperature inside the calorimeter will not change after the reaction is complete, and the graph of temperature as a function of time will be flat, also as shown in Fig. 2.

Figure 2. Graph of temperature as a function of time for an exothermic reaction in a perfect calorimeter.

Unfortunately, no calorimeter is perfect, and instantaneous mixing and reaction are not always achieved (even with efficient mixing). In this case, the graph of temperature as a function of time looks more like Fig. 3. We can still find ΔT, but now we must extrapolate back to when the solutions were mixed (time, t, equals zero). This is most easily done by performing a linear regression on the sloped portion of the graph (where, for exothermic reactions, heat is leaking out of the calorimeter) and obtaining Tfinal from the y-intercept.

Figure 3. Graph of temperature as a function of time for an exothermic reaction in a real calorimeter showing extrapolation back to the ideal Tfinal at the time of mixing (t = 0).

Some other experimental problems with real calorimeters that we need to account for are: 1) real calorimeters can absorb heat, and 2) although the species that undergo the chemical change result in a release/absorption of thermal energy, it is the entire solution that changes its temperature. Luckily both of these problems can be accounted for by measuring a constant, C, which is essentially a specific heat capacity for the calorimeter and everything in it (with units of J·g⁻¹·°C⁻¹). As long as we work with dilute aqueous solutions and the nature of the solutions does not change significantly from one experiment to another (e.g., the solutions are all dilute and aqueous), the calorimeter constant may be used for many different experiments in the same calorimeter.

The calorimeter constant is most easily determined by performing a reaction with a known enthalpy change (ΔHrxn). For this exercise we will use the neutralization reaction

HCl (aq) + NaOH (aq) → H2O (l) + NaCl (aq)

to determine the calorimeter constant. To relate ΔHrxn to the calorimeter's temperature change, we need to use the First Law of Thermodynamics. The heat that the chemical reaction puts out, or takes up, (qrxn) is simply the moles of the limiting reagent, nlimiting reagent, times ΔHrxn (recall that this is how an enthalpy change was defined), as given by Eqn. 2.

qrxn = nlimiting reagent·ΔHrxn     (2)

The solution (including the reactants and the products) and the calorimeter itself do not undergo a physical or chemical change, so we need to use the expression for specific heat capacity to relate their change in temperature to the amount of heat (qcal) that they have exchanged (Eqn. 3).

qcal = m·C·ΔT     (3)

In Eqn. 3, m is the mass (mass of the reactants + mass of water + mass of calorimeter), C is the calorimeter constant (specific heat capacity) and ΔT is the change in the temperature of the solution (and calorimeter). By the First Law of Thermodynamics, qrxn must be equal in magnitude to qcal, but opposite in sign (if the reaction gives off heat, the calorimeter must take it in). This leads to Eqn. 4, which is the starting point for all of the calculations in this exercise. It is then simply a matter of algebraic manipulation to put it in the form that we need (either solve Eqn. 4 for C, when we are determining the calorimeter constant, or for ΔH when we are trying to find the enthalpy change for a salt dissolving in water, ΔHsoln).

nlimiting reagent·ΔH = -m·C·ΔT     (4)
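The arithmetic that Fig. 3 and Eqn. 4 describe for a calorimeter-constant run can be sketched as follows. The temperature trace, masses, mole amount, and ΔHrxn value below are invented placeholders, not data from this experiment.

```python
# Sketch with made-up numbers: extrapolate the sloped portion of the trace in
# Fig. 3 back to the time of mixing (t = 0) to get the ideal T_final, then
# solve Eqn. 4 for the calorimeter constant C.
import numpy as np

delta_H_rxn = -57300.0      # J/mol, approximate enthalpy of HCl/NaOH neutralization
n_limiting = 0.100          # mol of limiting reagent (e.g. 50.0 mL of 2.0 M NaOH)
mass_total = 102.5          # g: cup + stir bar + both solutions (hypothetical)
T_initial = 21.40           # degC, average of the two starting temperatures

# Sloped, heat-leaking portion of the temperature trace well after mixing.
t = np.array([60.0, 90.0, 120.0, 150.0, 180.0, 210.0, 240.0])        # s
T = np.array([35.10, 34.95, 34.80, 34.65, 34.50, 34.35, 34.20])      # degC

slope, intercept = np.polyfit(t, T, 1)     # linear regression on the tail
T_final = intercept                        # value of the fit line at t = 0
delta_T = T_final - T_initial

# Eqn. 4: n_limiting * dH_rxn = -m * C * dT  =>  C = -n * dH / (m * dT)
C = -n_limiting * delta_H_rxn / (mass_total * delta_T)
print(f"T_final = {T_final:.2f} C, dT = {delta_T:.2f} C, C = {C:.2f} J/(g C)")
```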
Trends related to the positions of the elements on the periodic table are a well-established fact. There are vertical trends, horizontal trends, and some properties can trend both ways. Atomic size is an example of a trend. Generally, the lower in a group an element is, the larger it is, so a potassium atom is larger than a sodium atom. Also, the farther right in a row an atom is, the smaller it tends to be, so a carbon atom is smaller than a sodium atom. In this exercise, calorimetry will be used to investigate whether there is a periodic trend in the enthalpies of formation for the common cations of some metallic elements in aqueous solution. The formation of an aqueous cation from an element in its standard state is a fairly abstract multi-step process, but it relates directly to the oxidation-reduction reactivity of the element and to the solubility of ionic compounds. So, this is an important chemical process!

Each salt investigated in the lab has a metallic cation. Using the known ΔHf° of the solid inorganic salt (including any waters of hydration) and the known ΔHf° of the aqueous anion (from the following table), the ΔHf° of the aqueous cation can be calculated using the dissolution equation of the salt and the enthalpy of dissolution measured in the experiment. For example, the dissolution equation for aluminum chloride hexahydrate is

AlCl3·6H2O (s) → Al3+ (aq) + 3 Cl- (aq) + 6 H2O (l).

The ΔHf° of the Al3+ ion can, therefore, be found from the ΔHf° of the other species present in the reaction and the ΔHsoln found experimentally. Note that in this example the water changes from being bound in the aluminum chloride crystal lattice to being free liquid water. There is an enthalpy change associated with this process! So, it is important to know whether the solid salt is a hydrate or not, and if so, how many waters are present. If a periodic trend in the enthalpy of formation of the aqueous cation is present down a column or across a row, it should become apparent from the results. Any trend in ΔHf° will be revealed by arranging the class results in order of magnitude and seeing if the ordering follows the periodic table. If no trend is present, that should also be readily apparent.

Before coming to the laboratory be sure that you have determined ΔHrxn for the reaction of aqueous HCl with aqueous NaOH using the tabulated ΔHf° values (you might find this reaction's net ionic equation, H+ (aq) + OH- (aq) → H2O (l), an easier way to calculate ΔHrxn). It is highly advised that you have set up all of the equations that you will need during the laboratory in your notebook before lab. The main cause of people not finishing this exercise on time is being ill-prepared!

In this experiment, you will use a computer-based data collection system to record solution temperature as a function of time, and a magnetic stirrer to ensure mixing of reagents (see the experimental setup in Fig. 4 and Fig. 5). Before beginning, read the introduction to Logger Pro to learn how to assemble the computer and data acquisition system. You will be using stainless steel temperature probes; one in channel 1 of the LabPro interface and the other in channel 2. Set the software to collect data every second for 4 minutes (240 sec) and adjust the displayed precision to two decimal places. Your instructor will assist you in setting up the data collection system and running the software.

Figure 4. LabPro setup for this experiment showing temperature probes on both channel 1 and channel 2.

Figure 5. Experimental setup of the constant-pressure calorimeter (shown without the cover in place).
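The Hess's-law bookkeeping for extracting ΔHf° of the cation, using the aluminum chloride hexahydrate example above, can be sketched as below. Every numerical value here is a rough placeholder; substitute the tabulated ΔHf° values supplied with the exercise and your own measured ΔHsoln.

```python
# Sketch for AlCl3.6H2O(s) -> Al3+(aq) + 3 Cl-(aq) + 6 H2O(l).
# All values below are placeholders used only to show the arithmetic.
dH_soln = -56.0          # kJ/mol, hypothetical measured enthalpy of solution
dHf_salt = -2691.6       # kJ/mol, AlCl3.6H2O(s) (placeholder)
dHf_Cl = -167.2          # kJ/mol, Cl-(aq) (placeholder)
dHf_H2O = -285.8         # kJ/mol, H2O(l) (placeholder)

# dH_soln = [dHf(Al3+) + 3*dHf(Cl-) + 6*dHf(H2O)] - dHf(salt), so:
dHf_Al3plus = dH_soln + dHf_salt - 3 * dHf_Cl - 6 * dHf_H2O
print(f"estimated dHf(Al3+, aq) = {dHf_Al3plus:.1f} kJ/mol")
```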
Determination of the Calorimeter Constant

Measure and record the mass of a clean, dry Styrofoam cup. Place a dry magnetic stir bar in the cup and record the new mass. This cup will be your calorimeter for the day. Do not change cups! Otherwise, you will need to re-determine the calorimeter constant for the new cup.

Measure 50.0 mL of ~2 M NaOH with your graduated cylinder and place it into the cup. Assuming the solution has a density of 1.00 g/mL, determine the mass of the solution. Record the mass of the cup and the solution it contains in your notebook. Do NOT place a wet cup or a cup filled with liquid on the balance! Doing so can cause severe damage to the balance. Helpful hint: use the density and the volume to calculate the mass. Be sure that you also record the molarity of the NaOH used in your notebook. Calculate and record the number of moles of NaOH used.

Place 51.0 mL of ~2 M HCl in another clean, dry coffee cup. Again, assuming the density of the HCl solution is 1.00 g/mL, determine the mass of the solution that was used. Do NOT place a wet cup or a cup filled with liquid on the balance! Rather, use the solution's density and volume to find its mass. Record the molarity of the HCl used. Calculate and record the number of moles of HCl used. Determine whether NaOH or HCl is the limiting reagent.

Assemble the calorimeter apparatus, as shown in Fig. 5, by positioning the cup containing the NaOH solution and stir bar on the magnetic stirrer. Your instructor will assist you in positioning the cover, if needed. Begin gently stirring the solution (a setting of 1 or 2 on the magnetic stirrer is a good starting point). Rinse the channel 1 temperature probe with distilled water into a beaker and pat dry with a KimWipe®. Place the probe in the cup, being careful that the stir bar does not strike the probe. Gently clamp the temperature probe in place. Rinse the channel 2 temperature probe and then place it in the HCl solution. CAUTION! The temperature probe should not sit in the HCl solution for longer than one minute. If the probe stays in an acidic solution any longer than this, the steel will be irrevocably corroded.

The LoggerPro software will display the temperature of both solutions in real time in the upper left-hand corner of the window. Monitor the temperatures over the next several minutes. While the temperatures are equilibrating, make sure that the LoggerPro software is ready to start recording data. When the temperatures of the NaOH and HCl solutions no longer change, record the temperature of each. Calculate the average of the two temperatures, which will be Tinitial of the mixture. Remove the probe from the HCl solution and rinse it well with distilled water into a waste beaker. Move the cover to the side and then rapidly, but carefully, pour as much of the HCl solution as possible into the calorimeter and simultaneously initiate data collection in LoggerPro. Slide the cover back into place. While continuing to stir, record the solution's temperature every second over the next 4 minutes. By default, LoggerPro will construct a graph of temperature versus time as your data is being collected. The program will collect data from both probes, but only the channel 1 reading will change, and it will be the only one that we will analyze. Note that for clarity the signal from channel 2 has been omitted in all figures shown below.
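The arithmetic embedded in the steps above (moles from volume and molarity, the limiting reagent, and Tinitial as the average of the two equilibrated readings) amounts to a few lines; the concentrations and temperatures below are assumed example values, not measurements.

```python
# Sketch with assumed values: moles of each reactant, the limiting reagent,
# and T_initial as the average of the two equilibrated temperatures.
vol_NaOH, conc_NaOH = 0.0500, 2.00   # L, mol/L (nominal ~2 M)
vol_HCl,  conc_HCl  = 0.0510, 2.00   # L, mol/L

mol_NaOH = vol_NaOH * conc_NaOH      # 0.100 mol
mol_HCl  = vol_HCl * conc_HCl        # 0.102 mol
limiting = "NaOH" if mol_NaOH < mol_HCl else "HCl"

T_NaOH, T_HCl = 21.35, 21.45         # degC, example equilibrated readings
T_initial = (T_NaOH + T_HCl) / 2
print(f"{limiting} is limiting ({min(mol_NaOH, mol_HCl):.3f} mol); T_initial = {T_initial:.2f} degC")
```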
Figure 6. Typical trace of temperature as a function of time for an exothermic reaction as recorded by the LoggerPro software. Note that data from only one channel is shown.

The trace shown in Fig. 6 is fairly typical for an exothermic process, where the temperature of the solution rises rapidly before slowly diminishing as the system returns to room temperature. Since the temperature probe cannot respond instantaneously to a rapid change in temperature and the reaction may not take place instantaneously, the first portion of the data may exhibit some curvature before reaching a maximum. However, the data to the right of the curve's maximum should be fairly linear. Use the linear fit icon to draw the best-fit line, extending it back to the time of mixing, i.e., time = 0 min (see Fig. 7). The ideal final temperature of the mixture, Tfinal, is the temperature given by the best-fit line at the time of mixing. In other words, if you determine a best-fit line based on the data to the right of the curve's maximum, its y-intercept is Tfinal. Calculate the ideal temperature change, ΔT = Tfinal - Tinitial. Note that if two channels are being monitored, you will be prompted to specify which channel to analyze (select channel 1 if you have set up the experiment as described above).

Figure 7. Same data as shown in Fig. 6, but now with the results of the linear regression shown. As in Fig. 6, data from only one channel is shown.

Determine the total mass of the calorimeter, m (which includes the mass of the cup and everything in it), by adding the mass of the dry cup and stir bar, the mass of HCl and the mass of NaOH. Using the total mass, ΔHrxn, the moles of the limiting reagent, and ΔT, calculate the specific heat capacity of the calorimeter, C. Use the Store Latest Run command in LoggerPro to prevent overwriting of your data. This will write a file that only LoggerPro can read. To save your file in a format that Excel can read, select File, Export As from the menu bar and then select the Text option. Save your data to your Y: drive, or other removable data storage device. Record the file name in your notebook.

Repeat the above procedure twice more. Calculate an average specific heat capacity of the calorimeter and its associated 95% confidence interval.

Determination of a Heat of Solution

In this portion of the experiment, you will use the calorimeter from the previous portion to determine the heat of solution (ΔHsoln) for an inorganic salt. Your specific salt will be assigned by your instructor in the laboratory; all measurements are to be conducted using your assigned salt. Since you will not be using the second temperature probe, you can disconnect it. Clean and dry the coffee cup that you used for the calorimeter in the first part. Place 50.0 mL of distilled H2O in the cup. Assuming the water has a density of 1.00 g/mL, determine the mass of distilled water used. Do NOT place a wet cup or a cup filled with liquid on the balance! Assemble the calorimeter apparatus, insert the magnetic stir bar and begin gentle stirring. Rinse the channel 1 temperature probe with distilled water and pat dry. Place the probe in the water, as you did before, and note the temperature of the water over the next several minutes. When the temperature no longer changes, record it as Tinitial. Grind your assigned salt to a fine powder with a clean, dry mortar and pestle. Place approximately 3.0 g of the powdered salt into a clean, dry weigh boat and record the mass. The salt should be at room temperature, which we will assume is the same as the temperature of the water. Begin stirring the water in the calorimeter. This should be fairly vigorous, but not so vigorous that water splashes out of the calorimeter or there is excessive cavitation in the water.
Slide the cover out of the way, initiate data collection and then rapidly, but carefully, add the salt to the stirring water in the calorimeter. Slide the cover back over the cup's mouth. While continuing to stir, record the solution's temperature every few seconds over the next 15 minutes. The time required to reach the maximum/minimum temperature may be as short as 5 minutes and as long as 40 minutes (if the sample was not ground finely enough); adjust your acquisition parameters as required. LoggerPro will again construct a graph of temperature versus time based on your data. The appearance of your data will depend on how exothermic or endothermic the dissolution of your salt is. As with the HCl/NaOH data, draw the best-fit line through the data points which are approaching room temperature. The ideal final temperature of the mixture, Tfinal, is the temperature where the best-fit line crosses the time of mixing. If your data looks really strange, you might approximate Tfinal by the lowest temperature, for an endothermic reaction, or the highest temperature, for an exothermic reaction, that is achieved. Calculate ΔT. Using the total mass of the solution (mass of cup and stir bar from the first part, mass of water added and mass of salt) m, the number of moles of solute, and the previously established specific heat capacity of the calorimeter, calculate the heat of solution, ΔHsoln, for your salt. Store the latest run and repeat the analysis of your salt two additional times (don't forget to save your data!). Calculate the average ΔHsoln, with its associated 95% confidence interval, for your salt. Before you leave the laboratory, report your results to the rest of the class. Copy one run each for the HCl/NaOH and ΔHsoln portions of the experiment into Excel and include a printout of a plot of each dataset in your notebook.

Results and Analysis

Determination of the Calorimeter Constant: Determine the average C for your calorimeter from your three runs. Determine the estimated standard deviation and the 95% confidence interval for C. You will use the average C in your calculation of ΔHsoln, but we will not do a propagation of error analysis.

Determination of a Heat of Solution: From your three runs determine an average ΔHsoln for your salt. Also calculate the estimated standard deviation and the 95% confidence interval for ΔHsoln. Report your average ΔHsoln and its 95% confidence interval to the class. From your ΔHsoln and the tabulated ΔHf° values, determine ΔHf° for the cation in your salt. Be careful how you write the reaction that describes the salt dissolving (hydrates are different than anhydrous salts!). There is no need to propagate the uncertainty here (so there will be no confidence interval on ΔHf° for the cations).

For your conclusions use the outline for a measurement exercise. Examine the class data as a whole. Do you see any trends (for example, how does ΔHf° for the cations change as a function of an element's place on the periodic table)? Model your Summary Table after Table 1, below.

Table 1. Example of the Summary Table for this exercise. Fill in your values, and remember to include the 95% confidence interval for each ΔHsoln.

- 2. Zumdahl, S. S. Chemical Principles, 4th Ed.; Houghton-Mifflin: New York, 2002; Chapter 9.
- 3. Atkins, P. Physical Chemistry, 6th Ed.; W. H. Freeman: New York, 1998; Chapters 2 and 3.
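For the averages and 95% confidence intervals requested in the Results and Analysis section above, a minimal sketch using SciPy's t-distribution is shown below; the three ΔHsoln replicates are invented numbers.

```python
# Sketch: mean, estimated standard deviation, and 95% confidence interval for
# triplicate dHsoln measurements (the values below are made up).
import numpy as np
from scipy import stats

dH_soln = np.array([14.2, 15.1, 14.6])           # kJ/mol, three replicate runs
mean = dH_soln.mean()
s = dH_soln.std(ddof=1)                           # estimated standard deviation
t_crit = stats.t.ppf(0.975, df=len(dH_soln) - 1)  # two-sided 95%, n-1 degrees of freedom
ci = t_crit * s / np.sqrt(len(dH_soln))
print(f"dHsoln = {mean:.2f} +/- {ci:.2f} kJ/mol (95% CI)")
```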
http://chemlab.truman.edu/CHEM130Labs/Calorimetry.asp
Coordinate geometry is geometry dealing primarily with line graphs and the (x, y) coordinate plane. The ACT Math Test includes nine questions on coordinate geometry. The topics you need to know are:

- Number Lines and Inequalities
- The (x,y) Coordinate Plane
- Distance and Midpoints
- Slope
- Parallel and Perpendicular Lines
- The Equation of a Line
- Graphing Equations (parabolas, circles, and ellipses)

Most of the questions on coordinate geometry focus on slope. About two questions on each test will cover number lines and inequalities. The other topics are usually covered by just one question, if they are covered at all.

Number Lines and Inequalities

Number line questions generally ask you to graph inequalities. A typical number line graph question will ask you: Which of the following is the graph of the solution set for 2(x + 5) > 4? To answer this question, you first must solve for x. Divide both sides by 2 to get: x + 5 > 2. Subtract 5 from both sides to get: x > –3. Then you match x > –3 to its line graph. The circles at the endpoints of a line indicating an inequality are very important when trying to match an inequality to a line graph. An open circle at –3 denotes that x is greater than but not equal to –3. A closed circle would have indicated that x is greater than or equal to –3. For the solution set –3 < x < 3, where x must be greater than –3 and less than 3, the line graph looks like this:

The (x,y) Coordinate Plane

The (x,y) coordinate plane is described by two perpendicular lines, the x-axis and the y-axis. The intersection of these axes is called the origin. The location of any other point on the plane (which extends in all directions without limit) can be described by a pair of coordinates. Here is a figure of the coordinate plane with a few points drawn in and labeled with their coordinates. As you can see from the figure, each of the points on the coordinate plane receives a pair of coordinates: (x,y). The first coordinate in a coordinate pair is called the x-coordinate. The x-coordinate of a point is its location along the x-axis and can be determined by the point's distance from the y-axis (x = 0 at the y-axis). If the point is to the right of the y-axis, its x-coordinate is positive, and if the point is to the left of the y-axis, its x-coordinate is negative. The second coordinate in a coordinate pair is the y-coordinate. The y-coordinate of a point is its location along the y-axis and can be calculated as the distance from that point to the x-axis. If the point is above the x-axis, its y-coordinate is positive; if the point is below the x-axis, its y-coordinate is negative.

The ACT often tests your understanding of the coordinate plane and coordinates by telling you the coordinates of the vertices of a defined geometric shape like a square, and asking you to pick the coordinates of the last vertex. For example: In the standard (x,y) coordinate plane, 3 corners of a square are (2,–2), (–2,–2), and (–2,2). What are the coordinates of the square's fourth corner? The best way to solve this sort of problem is to draw a quick sketch of the coordinate plane and the coordinates given. You'll then be able to see the shape described and pick out the coordinates of the final vertex from the image. In this case, the sketch shows that the fourth corner is (2,2). A square is the easiest geometric shape that a question might concern. It is possible that you will be asked to deal with rectangles or right triangles. The method for any geometric shape is the same, though. Sketch it out so you can see it.
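Before moving on to distances and midpoints, the fourth-corner example above can also be checked computationally. The helper below is an illustration of my own (not an ACT method); it uses the fact that the missing corner equals a + c − b, where b is the given corner at the square's right angle.

```python
# Sketch: given three corners of a square (or rectangle), find the fourth.
# The corner adjacent to both others is the one whose connecting vectors are
# perpendicular; the missing corner is then a + c - b.
def fourth_corner(p1, p2, p3):
    for b in (p1, p2, p3):
        a, c = [p for p in (p1, p2, p3) if p is not b]
        va = (a[0] - b[0], a[1] - b[1])
        vc = (c[0] - b[0], c[1] - b[1])
        if va[0] * vc[0] + va[1] * vc[1] == 0:        # right angle at b
            return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])

print(fourth_corner((2, -2), (-2, -2), (-2, 2)))      # -> (2, 2)
```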
The ACT occasionally asks test takers to measure the distance between two points on the coordinate plane. Luckily, measuring distance in the coordinate plane is made easy thanks to the Pythagorean theorem. If you are given two points, (x1,y1) and (x2,y2), the distance between them is given by the following formula:

d = √((x2 - x1)² + (y2 - y1)²)

The distance between two points can be represented by the hypotenuse of a right triangle whose legs have lengths (x2 - x1) and (y2 - y1); a diagram of that triangle shows how the formula is based on the Pythagorean theorem. Here's a sample problem: What is the distance between (4,–3) and (–3,8)? To solve this problem, just plug the proper numbers into the distance formula. The distance between the points is √(7² + 11²) = √170, which equals approximately 13.04.

Like finding the distance between two points, the midpoint between two points in the coordinate plane can be calculated using a formula. If the endpoints of a line segment are (x1,y1) and (x2,y2), the midpoint of the line segment is:

((x1 + x2)/2, (y1 + y2)/2)

In other words, the x- and y-coordinates of the midpoint are the averages of the x- and y-coordinates of the endpoints. Here is a practice question: What is the midpoint of the line segment whose endpoints are (6,0) and …

Slope

The slope of a line is a measurement of how steeply the line climbs or falls as it moves from left to right. More technically, the slope is a line's vertical change divided by its horizontal change, also known as "rise over run." Given two points on a line, (x1,y1) and (x2,y2), the slope of that line can be calculated using the following formula:

slope = (y2 - y1)/(x2 - x1)

The variable most often used to represent slope is m. So, for example, the slope of a line that contains the points (–2,–4) and (6,1) is (1 - (-4))/(6 - (-2)) = 5/8.

Positive and Negative Slopes

You can easily determine whether the slope of a line is positive or negative just by looking at the line. If a line slopes uphill as you trace it from left to right, the slope is positive. If a line slopes downhill as you trace it from left to right, the slope is negative. You can determine the relative magnitude of the slope by the steepness of the line. The steeper the line, the more the "rise" will exceed the "run," and, consequently, the larger the slope will be. Conversely, the flatter the line, the smaller the slope will be. For practice, look at the lines in the figure below and try to determine whether their slopes are positive or negative and which have greater relative slopes: lines l and m have positive slopes, and lines n and o have negative slopes. In terms of slope magnitude, line l > m > n > o.

It can be helpful to recognize a few slopes by sight.
- A horizontal line has a slope of 0, since there is no "rise": the slope is 0 divided by the "run," which is 0.
- A vertical line has an undefined slope. In this case, there is no "run," so the slope is the "rise" divided by 0, and any fraction with 0 in its denominator is, by definition, undefined.
- A line that makes a 45° angle with the horizontal has a slope of 1 or –1. This makes sense because the "rise" equals the "run."

In the accompanying figure, one line has slope 0 because it is horizontal, and another has undefined slope because it is vertical. Line b has slope –1 because it makes a 45° angle with the horizontal and slopes downward as you move from left to right. Line c has slope 1 because it makes a 45° angle with the horizontal and slopes upward as you move from left to right.
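The distance, midpoint, and slope formulas translate directly into small helper functions; the sketch below simply re-checks the worked numbers from this section.

```python
# Sketch: the distance, midpoint, and slope formulas as helper functions,
# checked against the examples worked above.
import math

def distance(p, q):
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])   # raises ZeroDivisionError for vertical lines

print(distance((4, -3), (-3, 8)))   # sqrt(170) = 13.038...
print(slope((-2, -4), (6, 1)))      # 5/8 = 0.625
```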
Parallel and Perpendicular Lines

Parallel lines are lines that don't intersect. In other words, parallel lines are lines that share the exact same slope. Perpendicular lines are lines that intersect at a right angle (90°). In coordinate geometry, perpendicular lines have negative reciprocal slopes. That is, a line with slope m is perpendicular to a line with a slope of –1/m. In the figure below are three lines. Lines q and r both have a slope of 2, so they are parallel. Line s is perpendicular to both lines q and r, and thus has a slope of –1/2. On the ACT, never assume that two lines are parallel or perpendicular just because they look that way in a diagram. If the lines are parallel or perpendicular, the ACT will tell you so. (Perpendicular lines can be indicated by a little square located at the place of intersection, as in the diagram above.)

Equation of a Line

We've already shown you how to find the slope of a line using two points on the line. It is also possible to find the slope of a line using the equation of the line. In addition, the equation of a line can help you find the x- and y-intercepts of the line, which are the locations where the line intersects with the x- and y-axes. This equation for a line is called the slope-intercept form:

y = mx + b

where m is the slope of the line, and b is the y-intercept of the line.

Finding the Slope Using the Slope-Intercept Form

If you are given the equation of a line that matches the slope-intercept form, you immediately know that the slope is equal to the value of m. However, it is more likely that the ACT will give you an equation for a line that doesn't exactly match the slope-intercept form and ask you to calculate the slope. In this case, you will have to manipulate the given equation until it resembles the slope-intercept form. For example, what is the slope of the line defined by the equation 5x + 3y = 6? To answer this question, isolate the y so that the equation fits the slope-intercept form: 3y = –5x + 6, so y = –(5/3)x + 2. The slope of the line is –5/3.

Finding the Intercepts Using the Slope-Intercept Form

The y-intercept of a line is the y-coordinate of the point at which the line intersects the y-axis. Likewise, the x-intercept of a line is the x-coordinate of the point at which the line intersects the x-axis. In order to find the y-intercept, simply set x = 0 and solve for the value of y. To find the x-intercept, set y = 0 and solve for x. To sketch a line given in slope-intercept form, first plot the y-intercept, and then use the slope of the line to plot another point. Connect the two points to form your line. In the figure below, the line y = –2x + 3 is graphed. Since the slope is equal to –2, the line descends two units for every one unit it moves in the positive x direction. The y-intercept is at 3, so the line crosses the y-axis at (0,3).

For the ACT Math Test, you should know how the graphs of a couple of basic equations look; these are the most important equations in terms of graphing. If you add lesser-degree terms to these equations, their graphs will shift around the origin but retain their basic shape. You should also keep in mind what the graphs of the negatives of these equations look like. Occasionally, the ACT will test your knowledge of parabolas, circles, or ellipses. These topics do not regularly appear on the ACT, but it still pays to prepare: if these topics do appear, getting them right can separate you from the crowd.

A parabola is a "U"-shaped curve that can open either upward or downward. A parabola is the graph of a quadratic function, which, you may recall, follows the form y = ax² + bx + c. The equation of a parabola gives you quite a bit of information about the parabola:
- The vertex of the parabola is (–b/2a, c – b²/4a).
- The axis of symmetry of the parabola is the line x = –b/2a.
- The parabola opens upward if a > 0, and downward if a < 0.
- The y-intercept is the point (0, c).
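These parabola facts can be bundled into a small helper; the function below is a sketch of my own (its name and the sample coefficients are illustrative choices, not ACT material).

```python
# Sketch: read off a parabola's features from y = a*x**2 + b*x + c.
def parabola_facts(a, b, c):
    x_vertex = -b / (2 * a)
    y_vertex = c - b ** 2 / (4 * a)
    return {
        "vertex": (x_vertex, y_vertex),
        "axis of symmetry": f"x = {x_vertex}",
        "opens": "upward" if a > 0 else "downward",
        "y-intercept": (0, c),
    }

print(parabola_facts(1, -4, 3))   # y = x^2 - 4x + 3: vertex (2, -1), opens upward
```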
A circle is the collection of points equidistant from a given point, called the center of the circle. Circles are defined by the formula:

(x – h)² + (y – k)² = r²

where (h,k) is the center of the circle, and r is the radius. Note that when the circle is centered at the origin, h = k = 0, so the equation simplifies to:

x² + y² = r²

That's it. That's all you need to know about circles in coordinate geometry. Once you know and understand this equation, you should be able to sketch a circle in its proper place on the coordinate system if given its equation. You should also be able to figure out the equation of a circle given a picture of its graph with coordinates labeled.

An ellipse is a figure shaped like an oval. It looks like a circle somebody sat on, but it is actually a good deal more complicated than a circle, as you can see from all the jargon on the diagram. The two foci are crucial to the definition of an ellipse. The sum of the distances from the foci to any point on the ellipse is constant. To understand this visually, look at the figure below: the sum of the two distances from the foci is constant for each point on the ellipse. The line segment containing the foci of an ellipse with both endpoints on the ellipse is called the major axis. The endpoints of the major axis are called the vertices. The line segment perpendicularly bisecting the major axis with both endpoints on the ellipse is the minor axis. The point midway between the foci is the center of the ellipse. When you see an ellipse, you should be able to identify where each of these components would be. The equation of an ellipse is:

(x – h)²/a² + (y – k)²/b² = 1

where a, b, h, and k are constants. With respect to this formula, remember that:
- The center of the ellipse is (h,k).
- The length of the horizontal axis is 2a.
- The length of the vertical axis is 2b.
- If a > b, the major axis is horizontal and the minor axis is vertical; if b > a, the major axis is vertical and the minor axis is horizontal.
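As a quick way to exercise these standard forms, the sketch below evaluates the left-hand side of the ellipse equation for a given point: a value of 1 means the point lies on the curve, less than 1 means inside, and greater than 1 means outside. A circle is handled as the special case a = b = r. The function name and the sample numbers are my own illustrative choices.

```python
# Sketch: test where a point lies relative to a circle or ellipse using the
# standard forms above.
def ellipse_value(x, y, h, k, a, b):
    """Return (x-h)^2/a^2 + (y-k)^2/b^2; =1 on the curve, <1 inside, >1 outside."""
    return (x - h) ** 2 / a ** 2 + (y - k) ** 2 / b ** 2

# Circle of radius 5 centered at the origin: x^2 + y^2 = 25.
print(ellipse_value(3, 4, 0, 0, 5, 5))    # 1.0 -> (3,4) lies on the circle
# Ellipse centered at (2,-1) with horizontal axis 2a = 8 and vertical axis 2b = 6.
print(ellipse_value(2, 2, 2, -1, 4, 3))   # 1.0 -> (2,2) is an endpoint of the vertical axis
```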
http://www.sparknotes.com/testprep/books/act/chapter10section5.rhtml
From Math Images

The animation shows a three-dimensional projection of a rotating tesseract, the four-dimensional equivalent of a cube.

Basic Description

The tesseract, or tetracube, is a shape inhabiting four spatial dimensions. More specifically, it is the four-dimensional hypercube. The sides of the four-dimensional tesseract are three-dimensional cubes. Instead of a cube's eight corners, or vertices, a tesseract has sixteen. If you find this hard to picture, don't worry. As inhabitants of a three-dimensional world, we cannot fully visualize objects in four spatial dimensions. However, we can develop a general understanding of the tesseract by learning its structure, examining representations of the shape in lower dimensions, and exploring the math behind it.

The tesseract is analogous to the cube in the same way that the cube is analogous to the square, the square to the line, and the line to the point. To begin thinking about the relationship between tesseracts and cubes, it is helpful to consider the relation of cubes to squares, squares to lines, and lines to points. Let's start from the zero-dimensional point and build our way up to the four-dimensional tesseract. We form a one-dimensional line from a point by sweeping, or stretching, the point straight out in some direction. This is the first step shown in Image 1. Now imagine taking hold of this line and sweeping it out in a direction perpendicular to its length. If you sweep out a distance equal to the length of the line, you will form a two-dimensional square. This is shown in the second step of the diagram. Sweeping the square out by the same distance in a third direction, perpendicular to both directions already in the square, forms a three-dimensional cube.

Now we know the procedure to use to construct a tesseract from a cube. At each step so far, we took the original object and swept it out in a new direction perpendicular to every direction in the original object. We only have three spatial dimensions, and a cube inhabits all three, but try to imagine a new direction perpendicular to all of the up-down, left-right, and back-forth directions of the cube. Stretch the cube out a distance equal to the length of one of its sides into this new, fourth direction and you will have swept out a tesseract. This is shown in the last panel of Image 1. In the diagram, the orange w direction is not actually perpendicular to the other three, but it is the best we can do in a three-dimensional world. In fact, even the blue z direction isn't actually perpendicular to the flat x and y directions in the diagram. We just know to interpret the fact that these directions are perpendicular in three dimensions from how they are drawn on the two-dimensional computer screen. This is an important fact to keep in mind when discussing the 4-D tesseract. We can't directly draw 4-D objects, but we can't directly draw 3-D objects either, since drawings are two dimensional. So whenever we talk about a 3-D visualization of a 4-D object, we really mean a 2-D representation of a 3-D representation of the real 4-D object.
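The structure just described (sixteen vertices, edges joining vertices that differ in a single coordinate, and eight cubic facets) is easy to enumerate combinatorially. The sketch below places the vertices at coordinates ±1, an arbitrary but convenient choice.

```python
# Sketch: build the tesseract combinatorially. Its 16 vertices are all sign
# combinations of (+/-1, +/-1, +/-1, +/-1); two vertices share an edge when
# they differ in exactly one coordinate, giving 32 edges and 8 cubic facets.
from itertools import product

vertices = list(product((-1, 1), repeat=4))
edges = [(v, w) for i, v in enumerate(vertices) for w in vertices[i + 1:]
         if sum(a != b for a, b in zip(v, w)) == 1]

# Each facet is the set of vertices with one coordinate held fixed at +1 or -1.
facets = [[v for v in vertices if v[axis] == sign]
          for axis in range(4) for sign in (-1, 1)]

print(len(vertices), len(edges), len(facets))   # 16 32 8
```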
It is far more difficult to imagine than for the squares, but using the fourth dimension we could fold these eight cubes together to form a tesseract. Each cube becomes a side of the tesseract, analogous to the square faces of the cube, except three-dimensional. These eight cubic facets are oriented such that two parallel facets lie on opposing sides of the tesseract in each of the four spatial directions.
Visualizing the Tesseract
Visualizing four dimensions isn't easy when you live in three and use computer screens in two. In order to better understand the tesseract and interpret images like the one at the top of the page, it is helpful to consider how inhabitants of a two-dimensional world would go about understanding objects in three dimensions. Edwin Abbott's book Flatland presents such an analogy. Inhabitants of Flatland see and move in just two dimensions. In their world, three-dimensional shapes cannot be seen all at once, just as we cannot fully visualize a tesseract. There are two main ways an inhabitant of a flat world could perceive the structure of a three-dimensional object. We can use analogous methods to picture the tesseract. If a three-dimensional shape were to pass through the two-dimensional world of Flatland, the inhabitants would perceive a series of its slices. For a sphere, first a point would appear, then a gradually growing circle until the sphere was half-way through, and finally a circle that shrinks until it disappears altogether. An object like a cube would be more confusing to a flatlander, since its slices look different depending on how it is tilted as it passes through a flat plane. Consider Images 4 and 5, which show a cube passing through a two-dimensional plane and the corresponding slices at two different tilts. The right-side panel of each image is all a flatlander would be able to experience. As you can see, the slices in the two images look fairly different. This doesn't seem too strange to us, since we can see the cube in the first half of each image. But for a flatlander it might be hard to tell that the slices are from the same object. Just as flatlanders can only perceive two-dimensional slices of three-dimensional objects, we are limited to visualizing three-dimensional "slices" of the four-dimensional tesseract. Images 6 and 7 show the slices of a tesseract passing through three-dimensional space at two different tilts. The perspective is closely analogous to the flatlander's view of a passing cube in the second halves of Images 4 and 5. Similar to Image 4's depiction of a cube being sliced parallel to one of its faces as it passes through a two-dimensional plane, Image 6 shows a tesseract being sliced in three dimensions parallel to one of its cubical facets. As illustrated in the animations, slicing a cube this way yields a square while slicing a tesseract this way yields a cube. In Image 7 the tesseract is being sliced corner to corner as it passes through our three-dimensional view, analogous to how the tilted cube is being sliced in Image 5 as it passes through a two-dimensional plane. Note that only the light blue parts of these animations are the actual slices. The static backgrounds are just shadows or projections of the shapes being sliced. The easiest way to think about projections is probably as shadows. There are two main types of shadows, depending on the distance between the object casting a shadow and the source of light.
These correspond to the two main types of projection that can be used to visualize objects in fewer dimensions than they inhabit. For objects held close to a light source, features that are farther away from the light appear smaller in the shadow than those that are near the light source. This kind of shadow, depicting objects in perspective, is called a Stereographic Projection. For objects very far away from a light source, the light rays are so close to parallel that features farther from the source cease to be reduced in size in the shadow. The limiting situation, a shadow cast by the exactly parallel light of an infinitely distant source, is called an Orthographic Projection. This type of projection makes for more symmetric images, but lacks the sense of depth provided by stereographic projection. Even if a flatlander were told which type of shadow they were looking at, it would still be quite a challenge for them to mentally translate the two-dimensional projection into a three-dimensional shape. They would need to be told what motion and positioning in three dimensions looks like in a projection. Consider the shadow cast by the rotating cube beneath a nearby light source in the animation on the right, an example of stereographic projection. Note that although the cube may look solid in the animation, it casts a shadow as if its sides were semi-transparent, perhaps made of red glass, with edges made of some solid like wire. What is really a cube with six square faces appears in the shadow as a small square inside of a large square with four highly distorted squares in between. As the cube rotates, the side lengths and internal angles in the projection change; the distorted sides morph into squares and back as the inner and outer squares change places. We know these distortions are not actually occurring, and that as a part of the shadow grows the corresponding cube face is just rotating closer to the light source. The important features of the cube, like the number of faces and vertices, stay true to the actual three-dimensional object even in projection. These would all be important things for an inhabitant of Flatland trying to understand a cube to know. By analogy we can use these lessons about shadows to better visualize and understand the tesseract. While a cube with a facet directed towards a nearby light in three dimensions casts a shadow of a square within a square, a tesseract with a facet directed towards a nearby light source in four dimensions casts a three-dimensional "shadow" of a cube within a cube. Instead of the cube's facet closest to the light source projected as an outer square, we have the facet of the tesseract closest to the four-dimensional light source projected as an outer cube. Similarly, instead of the facet of a cube farthest from the light projected as an inner square, the facet of a tesseract farthest from the light is projected as an inner cube. In the shadow of a cube, the four sides appear as highly distorted squares in between these inner and outer squares. In the 3-D tesseract projection in Image 9, six of the tesseract's facets appear as highly distorted cubes occupying the space between the inner and outer cubes. In the 2-D shadow of the rotating cube we saw the inner square replace the outer square as the cube rotated through a third dimension.
In the animation at the top of the page we observe the inner cube, really just a facet of the tesseract farther away from the light source in the fourth dimension, unfold to replace the outer cube as the tesseract completes a half turn through the fourth dimension. These images help us to interpret stereographic projections of a tesseract from one perspective, but we could always change the tesseract's orientation so that, say, a corner were facing the light source. The projection would look quite different. To explore what the projection of a differently oriented tesseract would look like, try out the interactive feature below. To view stereographic projections like the one in Image 9 or this page's main animation, check the Perspective box. This provides a greater sense of depth, albeit with greater distortion of the true dimensions of the tesseract than with the default orthographic projection setting. This applet was created by Milosz Kosmider. For more visualizations of the tesseract, see the Related Links section at the bottom of the page.
A More Mathematical Explanation
In the language of geometry, the tesseract is a type of regular polytope. Since its sides are mutually perpendicular, it is further classified as an orthotope, the generalization of a rectangle or box to higher dimensions. More specifically, the tesseract is the four-dimensional case of a hypercube, an orthotope with all its edges of equal length.
Coordinates of the Tesseract
One of the most powerful mathematical methods for describing these kinds of shapes is coordinate geometry. In two-dimensional space, coordinates are represented by pairs of numbers, usually labeled x and y, with each pair specifying a point in the xy plane. Within this framework, a unit square can be defined with the coordinates of its four vertices, (0, 0), (0, 1), (1, 0), (1, 1), all the possible pairs of the numbers 0 and 1. Creating the unit cube by sweeping the unit square out in a new direction requires us to use three numbers to specify our points, usually called x, y, and z. The z coordinates for the vertices of the original unit square, now the base of the unit cube, are all 0. As a result of sweeping, we now have four more vertices in the same x and y positions but raised 1 unit in the z direction. Their z coordinates are thus all 1. In total the unit cube has eight vertices, occupying all the possible coordinate triples composed of 0s and 1s:
(0, 0, 0)  (0, 1, 0)  (1, 0, 0)  (1, 1, 0)
(0, 0, 1)  (0, 1, 1)  (1, 0, 1)  (1, 1, 1)
We follow the same pattern when sweeping out the unit tesseract with the unit cube. Once again we add a new coordinate, so that each point is now represented as (x, y, z, w), and once again we double the number of vertices. Half of the unit tesseract's vertices have the same coordinates as the unit cube's vertices except for 0s in the new w place, while the eight newly swept out vertices all have a w value of 1. All told, the sixteen vertices of the unit tesseract are given by the points
(0, 0, 0, 0)  (0, 1, 0, 0)  (1, 0, 0, 0)  (1, 1, 0, 0)
(0, 0, 1, 0)  (0, 1, 1, 0)  (1, 0, 1, 0)  (1, 1, 1, 0)
(0, 0, 0, 1)  (0, 1, 0, 1)  (1, 0, 0, 1)  (1, 1, 0, 1)
(0, 0, 1, 1)  (0, 1, 1, 1)  (1, 0, 1, 1)  (1, 1, 1, 1)
which represent all possible quadruples of the numbers 0 and 1. Within this framework of coordinate geometry, four dimensions is a natural extension of the more familiar two or three dimensions.
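The vertex lists above are easy to generate programmatically. The following short Python sketch (illustrative, not part of the original article) produces the vertices of the unit square, cube, and tesseract as all possible tuples of 0s and 1s.

```python
from itertools import product

# Vertices of the unit n-cube: every n-tuple whose entries are 0 or 1.
def unit_hypercube_vertices(n):
    return list(product((0, 1), repeat=n))

for n, name in [(2, "square"), (3, "cube"), (4, "tesseract")]:
    vertices = unit_hypercube_vertices(n)
    print(f"unit {name}: {len(vertices)} vertices")
    # prints 4, 8, and 16 vertices respectively

print(unit_hypercube_vertices(4)[:3])
# [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0)]
```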
To move up to a higher dimension, we just add a new coordinate to each of our points. Even though we can't visualize a four-dimensional point, it is perfectly legitimate and quite helpful to represent it mathematically. For example, the animations on this page were probably programmed using coordinate representations.
Using Coordinates to Find the Edges of the Tesseract
An edge of a hypercube is a one-dimensional line segment that connects two vertices which differ by one coordinate. To figure out how many edges a tesseract has, we can use our newly developed method of coordinate representation. But first let's apply our coordinates approach to the cube and check our answer against what we already know about the shape. Each vertex of a cube is represented by three coordinates. So how many edges meet at each vertex? Well, three coordinates per vertex means there are three different ways we can vary a single coordinate of any given vertex. So three perpendicular edges meet at each vertex. We saw earlier that the cube had eight vertices in total. That suggests 3×8 = 24 edges. However, each edge connects two vertices, so the number 24 counts each edge twice. Therefore, the cube has 3×8/2 = 12 edges. This matches what we know: four edges for the square base, four for the top, and four more connecting the base to the top. Now for the tesseract. We found earlier that the tesseract has sixteen vertices, each represented by four coordinates. Therefore there are four ways we can vary one coordinate of any given vertex, each way corresponding to a perpendicular direction. So four mutually perpendicular edges meet at each vertex. Again each edge corresponds to two of the tesseract's sixteen vertices, so the total number of edges in a tesseract is 4×16/2 = 32.
Using Coordinates to Find the Facets of the Tesseract
We can follow a similar approach to find how many cubic facets a tesseract has. Once again, let's first try our hand at using coordinates to find the facets of a cube and see if we get the correct result. The facets of the 3-D cube are 2-D squares. How many square facets meet at each vertex of the cube? Well, each vertex is represented by three coordinates. Each possible alteration of two of these coordinates corresponds to a square with one corner at that vertex. There are three possible choices for changing just two coordinates, so three square facets meet at each vertex of the cube. The cube has eight vertices, suggesting 3×8 = 24 square facets, but this would be counting each facet four times, since all four corners of each square facet correspond to a vertex of the cube. Therefore the cube has 3×8/4 = 6 square facets: the top, the bottom, and the four sides. The facets of the tesseract are 3-D cubes. Each vertex has four coordinates, which makes for four possible alterations of three coordinates, each corresponding to a cube with a corner at that vertex. So four cubes meet at each vertex. With the tesseract's sixteen vertices, this gives us an initial count of 4×16 = 64 cubic facets. Accounting for the fact that each cubic facet touches eight vertices with its corners, we find that the tesseract actually has 4×16/8 = 8 cubic facets.
More Geometry of the Tesseract
The regular progression from the properties of squares to the properties of cubes to the properties of tesseracts extends beyond these basic features. Let's now examine the extension of the cube's three-dimensional volume and diagonals to their four-dimensional equivalents in the tesseract.
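Before moving on, the edge and facet counts derived above are easy to confirm by brute force. This Python sketch (illustrative, not part of the original article) counts edges as pairs of vertices differing in exactly one coordinate, and facets as the ways of fixing one of the n coordinates to 0 or 1; it reproduces 12 edges and 6 facets for the cube, and 32 edges and 8 facets for the tesseract.

```python
from itertools import combinations, product

def count_edges(n):
    # An edge joins two vertices that differ in exactly one coordinate.
    vertices = list(product((0, 1), repeat=n))
    return sum(
        1
        for a, b in combinations(vertices, 2)
        if sum(x != y for x, y in zip(a, b)) == 1
    )

def count_facets(n):
    # A facet is obtained by fixing one of the n coordinates to 0 or 1.
    return 2 * n

for n, name in [(3, "cube"), (4, "tesseract")]:
    print(name, count_edges(n), "edges,", count_facets(n), "facets")
# cube 12 edges, 6 facets
# tesseract 32 edges, 8 facets
```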
Lines have length, squares have area, cubes have volume, so what does a tesseract have? To answer this question, it's once again useful to look closer at the tesseract's counterparts in lower dimensions. A line segment is said to have length m if we can cover it with exactly m line segments of unit length. Likewise, the surface of a square with sides of length m can be covered with m^2 unit squares, a quantity called its area, and a cube with edges of length m can be filled by exactly m^3 unit cubes, defining its volume. The equivalent feature of the tesseract is hypervolume. A tesseract with edges of length m has a four-dimensional interior which can be filled by m^4 unit tesseracts. Therefore the hypervolume of a tesseract is equal to m^4.
Diagonals
Tesseracts have diagonals just as squares and cubes do, and they can be found in much the same way that we find the diagonals of these lower dimensional shapes. The unit square has two identical diagonals, which can easily be found to be of length √2 using the Pythagorean Theorem. When we form the unit cube from the unit square, these become diagonals of the square faces, and a new, longer type of diagonal cutting across the inside of the unit cube is created. One of these longer diagonals can be viewed as the hypotenuse of the right triangle shown in purple in the diagram. One leg of this right triangle is the shorter diagonal of length √2 contained in the unit cube's square base. The other leg is an edge of the unit cube extending 1 unit in the z direction. Applying the Pythagorean Theorem, this longer, internal type of diagonal is found to be of length √3. When we form the unit tesseract from the unit cube, a third, even longer type of diagonal is formed. The old diagonals are still there, in the cubic facets of the unit tesseract and their square faces, but only this third type of diagonal spans the four-dimensional interior of the unit tesseract. As before, this diagonal can be viewed as the hypotenuse of a right triangle. This time the legs of the triangle are the longest of the diagonals contained within the cubic facets of the unit tesseract and an edge of the unit tesseract extending 1 unit in the w direction. Therefore the length of the unit tesseract's third and longest diagonal is √((√3)^2 + 1^2) = √4 = 2. The lengths of the diagonals of a non-unit square, cube, or tesseract are proportional to the length of a side. In other words, a tesseract with sides of length m has three types of diagonals, of length m√2, m√3, and m√4 = 2m. Now that we have developed a mathematical representation of the tesseract and used it to find the basic properties of the shape, we can summarize the basic geometric features of n-cubes, or hypercubes, for n = 0 to n = 4. Note that m represents the length of an edge.
Dimension | Name | Number of Vertices | Number of Edges | Number of Facets | Content | Length of Longest Diagonal
0 | Point | 1 | 0 | 0 | 1 | 0
1 | Line Segment | 2 | 1 | 2 (points) | m | 0
2 | Square | 4 | 4 | 4 (line segments) | m^2 | m√2
3 | Cube | 8 | 12 | 6 (squares) | m^3 | m√3
4 | Tesseract | 16 | 32 | 8 (cubes) | m^4 | m√4 = 2m
Why It's Interesting
Today the idea of more than three dimensions is fairly common. You can read about hyperspace in science fiction stories, four-dimensional space-time in physics textbooks, and a mind boggling 10 to 26 "curled up" dimensions in the writings of many modern scientists. But prior to the development of four-dimensional geometry and the popularization of the idea of dimensions by books like Abbott's Flatland in the 1800s, the public, the physicists, and even most mathematicians did not pay much attention to the idea of four dimensions, much less 26.
The door to higher dimensionality opened when people started studying strange geometric shapes like the tesseract. The tesseract is probably the best known higher dimensional shape, and as such represents a kind of symbol of the expansion of the human imagination into higher dimensions.
As Many Dimensions as You Like
In three dimensions there are five regular polytopes, known as the Platonic Solids, which as the name suggests have been studied since the time of the ancient Greeks. One of the first mathematicians to take four-dimensional geometry seriously was Ludwig Schläfli. In the mid-1800s, Schläfli figured out that in four-dimensional space there are six regular polytopes. The tesseract is one, as is an enormous shape with 600 facets, each one a three-dimensional tetrahedron. And why stop at four? We can mathematically analyze and even form visual projections of objects in five, six, or more dimensions. Instead of using triples or quadruples of coordinates, we can consider a space of arbitrarily many dimensions, consisting of all n-tuples of the form (x1, x2, ..., xn). For n > 4 dimensions, there are three regular polytopes, one of which is the n-dimensional hypercube.
Basic Features of Hypercubes
The same progressions from squares to cubes to tesseracts which we used to examine the geometry of the tesseract apply more generally to hypercubes. Every time we form an (n+1)-dimensional hypercube from an n-dimensional hypercube, we are in effect taking hold of the shape's vertices and sweeping the whole thing out in a new direction perpendicular to all the directions in the original hypercube. The resulting hypercube has twice as many vertices as the old hypercube. Therefore, building up from a zero-dimensional point with one vertex, the number of vertices in an n-dimensional hypercube is 2^n. Other properties of tesseracts that we found using analogies to lower dimensions, like the number of edges and the length of diagonals, can be found for n-dimensional hypercubes in a similar fashion. Below is a summary of the geometric features of n-dimensional hypercubes with edges of length m.
Number of Vertices: 2^n
Number of Edges: n × 2^(n-1)
Number of Facets ((n-1)-dimensional sides): 2n
Length of Longest Diagonal: m√n
These are just the regular polytopes, shapes with all identical faces. Higher dimensions are home to innumerable irregular polytopes as well. Knowledge of relatively simple higher-dimensional shapes like the tesseract and how to wrap one's head around its four-dimensional structure would be essential for anyone interested in tackling those far stranger creatures.
Higher Dimensions in Physics
Physics and mathematics borrow from each other all the time. Sometimes the mathematicians develop an idea that the physicists find useful later, and sometimes the physicists discover a phenomenon and end up developing exciting new mathematics to describe it. When the geometry of higher dimensions first started to be studied in the 1800s, it was generally regarded as purely abstract and mathematical. But with the development of Einstein's theories of relativity in the early 1900s and more recent developments in superstring theory, physicists have been taking the idea very seriously. As mathematician Ian Stewart says, "The potential importance of high-dimensional geometry for real physical space and time has suddenly become immense".
Dimensions in String Theory
In modern String Theory, the fundamental components of the universe are not particles but tiny vibrating strings.
Within the framework of the theory, the large variety of different types of particles we observe is composed of the same fundamental strings vibrating at distinct frequencies. While this model is quite successful in many respects, it requires our space-time universe to be either 10-dimensional or 26-dimensional, implying there are either six or 22 spatial dimensions that we don't know about. While this may sound absurd, there is no discrepancy with our everyday experience if these extra dimensions are "curled up" too small for us to detect. Imagine ants confined to walking along a thin piece of thread. For all practical purposes, their little world is one-dimensional. But imagine that the thread were thicker, like a large rope. Suddenly, besides just going backwards and forwards, the ants can move side to side along the curvature of this thick rope. According to String Theory, we are like the ants on the thread, living in a world with extra dimensions too small to be noticed. Some physicists are considering an even stranger possibility, and suggest that the extra dimensions are quite large, so large that our four-dimensional world exists inside of a higher-dimensional reality. Like inhabitants of Flatland who can't move in the third dimension, we would be prevented from moving in these hidden directions by our laws of physics. The mathematics of higher dimensions has applications beyond just spatial and temporal dimensions. An n-dimensional space consists of a bunch of points, each of which is a list of n numbers. We don't have to think of these numbers as coordinates for positions in physical space. They could represent any variables. Consider a bicycle. We can describe the state of all the bicycle's crucial components with six numbers: the angle between the handlebars and the frame, the angular positions of each of the two wheels, the positioning of the pedals' axle, and the angular positions of each of the two pedals. A bicycle is of course a three-dimensional object. But we can describe any configuration of the bicycle with six numbers, numbers we can view as generalized coordinates, meaning that the state of the bicycle exists in an abstract, six-dimensional space. To get the hang of riding a bicycle, you need to learn how these six numbers interact, not to mention the extra variables for motion and interaction with the road. This can be thought of as learning the six-dimensional geometry of "bicycle space". This way of visualizing the state of a system in an imaginary space of as many dimensions as you have variables turns out to be quite useful, and is used by mathematicians, physicists, and even economists and biologists. For example, virologists find it useful to think of specific viruses as "points" in a space of DNA sequences. Each virus has a DNA sequence composed of a series of smaller molecular components called bases, represented by the four letters A, C, G, and T. The bases are the coordinates, and the DNA sequences are the points. Each coordinate is limited to being one of the four bases, just like how the coordinates for the vertices of a unit square, cube, or tesseract are limited to being either 0 or 1. A unit hypercube is thus like the space DNA sequences would occupy if DNA had only two bases. This allows scientists to think of the possible viral DNA sequences as vertices of a very high-dimensional hypercube-like object, a sort of hypercube with its interior filled with smaller hypercubes.
Each vertex corresponds to a specific virus, and since edges connect two vertices which differ by exactly one position, each edge can be thought of as a point mutation changing one base in a virus' DNA sequence. Because this hypercube-like shape has such a high dimension, each vertex connects to quite a lot of other vertices, meaning that the virus has the potential to mutate in an enormous number of different ways. Using the geometry of higher dimensional objects, virologists can understand how quickly the variation in possible mutations grows with longer sequences of viral DNA.
Related Links
- A gradual explanation of the tesseract with lots of interactive features: http://www.learner.org/courses/mathilluminated/interactives/dimension/
- A series of videos explaining higher dimensional shapes with a focus on visualizations using a form of stereographic projection: http://www.dimensions-math.org/
- More great animations: http://www.math.union.edu/~dpvc/math/4D/welcome.html
- Carl Sagan discussing Flatland and hypercubes on an episode of Cosmos: http://www.youtube.com/watch?v=KIadtFJYWhw
- An online version of Edwin Abbott's classic Flatland: http://www.ibiblio.org/eldritch/eaa/FL.HTM
References
- Kaku, M. (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the 10th Dimension. New York: Oxford University Press.
- Rucker, R. (1984). The Fourth Dimension: Toward a Geometry of Higher Reality. Boston: Houghton Mifflin Company.
- Rehmeyer, J. (2008). "Seeing in Four Dimensions". Science News.
- Stewart, I. (2002). The Annotated Flatland. Cambridge: Perseus Publishing.
http://mathforum.org/mathimages/index.php?title=Tesseract&redirect=no
The derivative as the slope of a graph is standard fare, and it's important for visualizing calculus.
The Derivative as Slope
Let's look at the graph of y = x^2. If we take a point on this graph, for example (2,4), the y-value is the square of the x-value. If we look at a nearby point, those values have changed by dx and dy respectively. We can visualize those changes like this: dx and dy are supposed to represent tiny changes, so we better bring the points in close to each other and zoom in. Any reasonable curve looks like a straight line when you zoom in on it enough, including this one. As far as these nearby points are concerned, y = x^2 is a line, and they are on it. That line is called the tangent line. Here it is: The value of dy/dx is the derivative of y with respect to x, but in this context it is also called the slope of the tangent line. So, the derivative of a function at a certain point is the slope of the tangent line at that point. If we zoom back out again, eventually the graph of y = x^2 no longer looks like a line; we can see its curvature. The tangent line tracks the graph for a while, but eventually diverges. The red line shown below is the tangent line to the parabola. The derivative of y = x^2 with respect to x is 2x, so the slope of this tangent line through (2,4) is 2(2) = 4. To find the equation for the tangent line itself, we choose the line with the specified slope that goes through the point. That would be y = 4x - 4. Elementary geometry tells us that the tangent to a circle is perpendicular to the radius. Let's combine this fact with some calculus. If we have a circle centered at the origin, the slope of the radius to a point (x,y) on the circle is y/x. The circle is given by x^2 + y^2 = r^2. Applying d/dx to both sides gives 2x + 2y(dy/dx) = 0 (because r^2 is a constant, so its derivative is zero). This simplifies to dy/dx = -x/y, which is the slope of the tangent line. Since this is perpendicular to a line of slope y/x, we see that perpendicular lines have negative-reciprocal slopes, a fact familiar from algebra. If you want to estimate the square root of a number N, a good way is to take a guess a, then average a with N/a. For example, to find the square root of 37, guess that it's 6, then take the average of 6 and 37/6, which is about 6.0833. The actual answer is about 6.0828. It's close. To get closer, iterate: average 6.0833 with 37/6.0833 to get roughly 6.0827626. The actual answer, with more accuracy, is 6.08276253. So we've got 7 decimal places of accuracy after two iterations of guessing. Calculus shows us where this comes from. We are estimating √37. That is a zero of f(x) = x^2 - 37. So we plot f(x) (here, f(6) = 36 - 37 = -1). We don't know where the zero is, but we know that x = 6 is near the zero. So we draw the tangent line to the graph at (6,-1). Its slope is f'(6) = 2(6) = 12, so this tangent is y = 12(x - 6) - 1 = 12x - 73. The tangent line tracks the parabola quite closely for the very short distance from the point to wherever the zero is. So closely that we can't even see the difference there. Zoom in near the point (6,-1). Now we see that the tangent line is a very good approximation to the parabola near the zero, so we can approximate the zero using the zero of the tangent line instead of the zero of the parabola. The zero of the tangent line is given by 12x - 73 = 0, or x = 73/12 ≈ 6.0833. This is our first new guess for the zero of the parabola. It's off, but only by a tiny bit, as this even-more-zoomed picture shows. We've zoomed in so closely that the original point (6,-1) is no longer visible. From here, we can iterate the process by drawing a new tangent line like this: We've zoomed in even closer. The red line is the tangent that gave us our first improved guess of 6.0833.
Next, we drew a new tangent (purple) to the graph (blue) at the location of the improved guess to get a second improved guess, which is again so close we can't even see the difference on this picture, despite zooming in three times already. This general idea of estimating the zeroes of a function by guessing, drawing tangents, and finding a zero of the tangent is called Newton's method.
- Take a graph and find the places where the tangent line slices through the graph, rather than lying completely above or below it near the point of tangency. What is unique about the derivative at these points? (Answer: the derivative is at a local minimum or maximum (i.e. the graph is steepest) when the tangent line slices through)
- Find the slope of the tangent line to a point (x,y) on the ellipse x^2/a^2 + y^2/b^2 = 1 via calculus. Find it again by starting with the unit circle x^2 + y^2 = 1, for which you already know the slope of the tangent, and making appropriate substitutions for x and y. (Answer: dy/dx = -(b^2 x)/(a^2 y))
- In this post, we found that y = 4x - 4 is tangent to y = x^2 at (2,4). Confirm this without calculus by noting that there are many lines through (2,4), all with different slopes. The thing that singles out the tangent line is that it only intersects the parabola once. Any line through (2,4) with a shallower slope than the tangent will intersect the parabola at (2,4), but intersect again somewhere off to the left. Any line with a steeper slope will have a second intersection to the right. Use algebra to write down the equation for a line passing through (2,4) with unknown slope, and set its y-value equal to x^2 to find the intersections with the parabola. What slope does the line need to have so that there is only one such intersection?
- Do the previous exercise over for a circle (i.e. use algebra to find the tangent line to a circle).
- For any point outside a circle, there are two tangents to the circle that pass through the point. When are these tangents perpendicular? (Answer: when the point is on a circle with the same center and a radius √2 times as large)
- Newton's method of estimating zeroes gave the same numerical answer for the zero of x^2 - 37 as the averaging algorithm for estimating square roots gave for √37. Show that this is always the case (i.e. perform Newton's method on f(x) = x^2 - N with a tangent at some point x = a, and show that the new guess generated is the same as that given in the algorithm).
- Use Newton's method to estimate the cube root of 28 to four decimal places (Answer: 3.0366).
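The iteration described above is easy to put into code. Here is a minimal Python sketch (mine, not from the original post; the function names are arbitrary) of Newton's method applied to f(x) = x^2 - 37. It reproduces the guesses 6 → 6.0833 → 6.08276 discussed above.

```python
# Newton's method for f(x) = x^2 - 37: repeatedly replace the guess x
# with the zero of the tangent line drawn at (x, f(x)).

def newton(f, fprime, x, steps):
    for _ in range(steps):
        x = x - f(x) / fprime(x)   # zero of the tangent line at x
        print(x)
    return x

f = lambda x: x**2 - 37
fprime = lambda x: 2 * x           # derivative of x^2 - 37

newton(f, fprime, x=6.0, steps=3)
# Prints roughly:
#   6.083333333333333
#   6.082762557077625
#   6.082762530298220   (sqrt(37) to the digits shown)
```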
http://arcsecond.wordpress.com/tag/tangent-lines/
An analog-to-digital converter (abbreviated ADC, A/D or A to D) is an electronic integrated circuit which converts continuous signals to discrete digital numbers. The reverse operation is performed by a digital-to-analog converter (DAC). Typically, an ADC is an electronic device that converts an input analog voltage (or current) to a digital number. The digital output may use different coding schemes, such as binary, Gray code or two's complement binary. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The values are usually stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or "levels", is usually a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 (i.e. unsigned integer) or from -128 to 127 (i.e. signed integer), depending on the application. Resolution can also be defined electrically, and expressed in volts. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of discrete intervals, as in the formula Q = E_FSR / 2^M, where
- Q is the resolution in volts per step (volts per output code),
- E_FSR is the full scale voltage range, V_RefHi - V_RefLow, and
- M is the ADC's resolution in bits.
The number of intervals is given by the number of available levels (output codes), which is N = 2^M. Some examples may help:
- Example 1
- Full scale measurement range = 0 to 10 volts
- ADC resolution is 12 bits: 2^12 = 4096 quantization levels (codes)
- ADC voltage resolution is: (10 V - 0 V) / 4096 codes = 10 V / 4096 codes ≈ 0.00244 volts/code ≈ 2.44 mV/code
- Example 2
- Full scale measurement range = -10 to +10 volts
- ADC resolution is 14 bits: 2^14 = 16384 quantization levels (codes)
- ADC voltage resolution is: (10 V - (-10 V)) / 16384 codes = 20 V / 16384 codes ≈ 0.00122 volts/code ≈ 1.22 mV/code
- Example 3
- Full scale measurement range = 0 to 8 volts
- ADC resolution is 3 bits: 2^3 = 8 quantization levels (codes)
- ADC voltage resolution is: (8 V - 0 V) / 8 codes = 8 V / 8 codes = 1 volt/code = 1000 mV/code
In practice, the smallest output code ("0" in an unsigned system) represents a voltage range which is 0.5X (half as wide) of the ADC voltage resolution (Q), while the largest output code represents a voltage range which is 1.5X (50% wider) of the ADC voltage resolution. The other N - 2 codes are all equal in width and represent the ADC voltage resolution (Q) calculated above. Doing this centers each code on an input voltage that lies at the middle of its division of the input voltage range. For example, in Example 3, with the 3-bit ADC spanning an 8 V range, each of the N divisions would represent 1 V, except the 1st ("0" code) which is 0.5 V wide, and the last ("7" code) which is 1.5 V wide. With this arrangement the "1" code spans a voltage range from 0.5 to 1.5 V, the "2" code spans a voltage range from 1.5 to 2.5 V, etc. Thus, if the input signal is at 3/8ths of the full-scale voltage, then the ADC outputs the "3" code, and will do so as long as the voltage stays within the range of 2.5/8ths and 3.5/8ths. This practice is called "Mid-Tread" operation. This type of ADC can be modeled mathematically as a quantizer whose output code is k = floor(V_in/Q + 1/2), clipped to the range 0 to 2^M - 1. The exception to this convention seems to be the Microchip PIC processor, where all of the steps are equal width.
This practice is called "Mid-Rise with Offset" operation. In practice, the useful resolution of the converter is limited by the signal-to-noise ratio of the signal in question. If there is too much noise present in the analog input, it will be impossible to accurately resolve beyond a certain number of bits of resolution, the "effective number of bits" (ENOB). If a preamplifier has been used prior to A/D conversion, the noise introduced by the amplifier is an important contributing factor towards the overall SNR. While the ADC will produce a result, the result is not accurate, since its lower bits are simply measuring noise. The signal-to-noise ratio should be around 6 dB per bit of resolution required. Most ADCs are of a type known as linear, although analog-to-digital conversion is an inherently non-linear process (since the mapping of a continuous space to a discrete space is a piecewise-constant and therefore non-linear operation). The term linear as used here means that the range of the input values that map to each output value has a linear relationship with the output value, i.e., that the output value k is used for the range of input values from m(k + b) to m(k + 1 + b), where m and b are constants. Here b is typically 0 or -0.5. When b = 0, the ADC is referred to as mid-rise, and when b = -0.5 it is referred to as mid-tread. If the probability density function of a signal being digitized is uniform, then the signal-to-noise ratio relative to the quantization noise is the best possible. Because of this, it's usual to pass the signal through its cumulative distribution function (CDF) before the quantization. This is good because the regions that are more important get quantized with a better resolution. In the dequantization process, the inverse CDF is needed. This is the same principle behind the companders used in some tape-recorders and other communication systems, and is related to entropy maximization. (Never confuse companders with compressors!) For example, a voice signal has a Laplacian distribution. This means that the region around the lowest levels, near 0, carries more information than the regions with higher amplitudes. Because of this, logarithmic ADCs are very common in voice communication systems to increase the dynamic range of the representable values while retaining fine-granular fidelity in the low-amplitude region. An eight-bit A-law or μ-law logarithmic ADC covers a wide dynamic range and has a high resolution in the critical low-amplitude region that would otherwise require a 12-bit linear ADC. An ADC has several sources of error. Quantization error and (assuming the ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital conversion. There is also a so-called aperture error, which is due to clock jitter and is revealed when digitizing a time-variant signal (not a constant value). These errors are measured in a unit called the LSB, which is an abbreviation for least significant bit. In the above example of an eight-bit ADC, an error of one LSB is 1/256 of the full signal range, or about 0.4%. Quantization error is due to the finite resolution of the ADC, and is an unavoidable imperfection in all types of ADC. The magnitude of the quantization error at the sampling instant is between zero and half of one LSB. In the general case, the original signal is much larger than one LSB. When this happens, the quantization error is not correlated with the signal, and has a uniform distribution.
Its RMS value is the standard deviation of this distribution, given by q/√12 ≈ 0.289 LSB, where q is one quantization step. In the eight-bit ADC example, this represents 0.113% of the full signal range. At lower levels the quantizing error becomes dependent on the input signal, resulting in distortion. This distortion is created after the anti-aliasing filter, and if these distortion products lie above half the sample rate they will alias back into the audio band. In order to make the quantizing error independent of the input signal, noise with an amplitude of one quantization step is added to the signal. This slightly reduces the signal-to-noise ratio, but completely eliminates the distortion. It is known as dither. All ADCs suffer from non-linearity errors caused by their physical imperfections, causing their output to deviate from a linear function (or some other function, in the case of a deliberately non-linear ADC) of their input. These errors can sometimes be mitigated by calibration, or prevented by testing. Important parameters for linearity are integral non-linearity (INL) and differential non-linearity (DNL). Imagine that we are digitizing a sine wave x(t) = A sin(2π f0 t). Provided that the actual sampling time uncertainty due to clock jitter is Δt, the error caused by this phenomenon can be estimated as E_ap ≤ 2π · f0 · A · Δt. One can see that the error is relatively small at low frequencies, but can become significant at high frequencies. This effect can be ignored if it is relatively small as compared with quantizing error. Jitter requirements can be calculated using the following formula: Δt < 1 / (2^q · π · f0), where q is the number of ADC bits. The table below gives the maximum allowable jitter for several resolutions and input frequencies:
ADC resolution in bits | 1 Hz | 44.1 kHz | 192 kHz | 1 MHz | 10 MHz | 100 MHz | 1 GHz
8  | 1243 µs | 28.2 ns | 6.48 ns | 1.24 ns | 124 ps | 12.4 ps | 1.24 ps
10 | 311 µs | 7.05 ns | 1.62 ns | 311 ps | 31.1 ps | 3.11 ps | 0.31 ps
12 | 77.7 µs | 1.76 ns | 405 ps | 77.7 ps | 7.77 ps | 0.78 ps | 0.08 ps
14 | 19.4 µs | 441 ps | 101 ps | 19.4 ps | 1.94 ps | 0.19 ps | 0.02 ps
16 | 4.86 µs | 110 ps | 25.3 ps | 4.86 ps | 0.49 ps | 0.05 ps |
18 | 1.21 µs | 27.5 ps | 6.32 ps | 1.21 ps | 0.12 ps | |
20 | 304 ns | 6.88 ps | 1.58 ps | 0.16 ps | | |
24 | 19.0 ns | 0.43 ps | 0.10 ps | | | |
32 | 74.1 ps | | | | | |
This table shows, for example, that it is not worth using a precise 24-bit ADC for sound recording if we don't have an ultra low jitter clock. One should consider taking this phenomenon into account before choosing an ADC. The analog signal is continuous and it is necessary to convert this to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. The rate of new values is called the sampling rate or sampling frequency of the converter. A continuously varying bandlimited signal can be sampled (that is, the signal values at intervals of time T, the sampling time, are measured and stored) and then the original signal can be exactly reproduced from the discrete-time values by an interpolation formula. The accuracy is limited by quantization error. However, this faithful reproduction is only possible if the sampling rate is higher than twice the highest frequency of the signal. This is essentially what is embodied in the Shannon-Nyquist sampling theorem. Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time).
An input circuit called a sample and hold performs this task—in most cases by using a capacitor to store the analogue voltage at the input, and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally. All ADCs work by sampling their input at discrete intervals of time. Their output is therefore an incomplete picture of the behaviour of the input. There is no way of knowing, by looking at the output, what the input was doing between one sampling instant and the next. If the input is known to be changing slowly compared to the sampling rate, then it can be assumed that the value of the signal between two sample instants was somewhere between the two sampled values. If, however, the input signal is changing fast compared to the sample rate, then this assumption is not valid. If the digital values produced by the ADC are, at some later stage in the system, converted back to analog values by a digital to analog converter or DAC, it is desirable that the output of the DAC be a faithful representation of the original signal. If the input signal is changing much faster than the sample rate, then this will not be the case, and spurious signals called aliases will be produced at the output of the DAC. The frequency of the aliased signal is the difference between the signal frequency and the sampling rate. For example, a 2 kHz sine wave being sampled at 1.5 kHz would be reconstructed as a 500 Hz sine wave. This problem is called aliasing. To avoid aliasing, the input to an ADC must be low-pass filtered to remove frequencies above half the sampling rate. This filter is called an anti-aliasing filter, and is essential for a practical ADC system that is applied to analog signals with higher frequency content. Although aliasing in most systems is unwanted, it should also be noted that it can be exploited to provide simultaneous down-mixing of a band-limited high frequency signal (see frequency mixer). In A to D converters, performance can usually be improved using dither. This is a very small amount of random noise (white noise) which is added to the input before conversion. Its amplitude is set to be about half of the least significant bit. Its effect is to cause the state of the LSB to randomly oscillate between 0 and 1 in the presence of very low levels of input, rather than sticking at a fixed value. Rather than the signal simply getting cut off altogether at this low level (which is only being quantized to a resolution of 1 bit), it extends the effective range of signals that the A to D converter can convert, at the expense of a slight increase in noise - effectively the quantization error is diffused across a series of noise values, which is far less objectionable than a hard cutoff. The result is an accurate representation of the signal over time. A suitable filter at the output of the system can thus recover this small signal variation. An audio signal of very low level (with respect to the bit depth of the ADC) sampled without dither sounds extremely distorted and unpleasant. Without dither, the low level always yields the same code from the A to D converter, so none of the signal's variation survives. With dithering, the true level of the audio is still recorded as a series of values over time, rather than a series of separate bits at one instant in time.
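As a rough illustration of why dither preserves low-level detail, here is a small Python sketch (illustrative only, not from the original article; the 10.2-LSB level, quarter-LSB sine amplitude, and averaging counts are invented). A sine wave whose swing is smaller than one quantization step collapses to a single code without dither, but averaging many dithered conversions recovers it.

```python
import random, math

LSB = 1.0  # quantization step

def quantize(v):
    return round(v / LSB) * LSB

def quantize_dithered(v):
    # Add uniform dither of one LSB peak-to-peak (amplitude of half an LSB)
    # before quantizing, as described above.
    return quantize(v + random.uniform(-0.5 * LSB, 0.5 * LSB))

# A sine wave whose swing is only a quarter of one LSB, sitting at 10.2 LSB.
signal = [10.2 + 0.25 * math.sin(2 * math.pi * k / 50) for k in range(50)]

plain = [quantize(v) for v in signal]
print(set(plain))  # {10.0} - every sample collapses to the same code

# Average many dithered conversions of each sample: the tiny sine reappears.
averaged = [
    sum(quantize_dithered(v) for _ in range(2000)) / 2000 for v in signal
]
print(min(averaged), max(averaged))  # roughly 9.95 and 10.45
```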
A virtually identical process, also called dither or dithering, is often used when quantizing photographic images to a smaller number of bits per pixel—the image becomes noisier but to the eye looks far more realistic than the quantized image, which otherwise becomes banded. This analogous process may help to visualize the effect of dither on an analogue audio signal that is converted to digital. Dithering is also used in integrating systems such as electricity meters. Since the values are added together, the dithering produces results that are more exact than the LSB of the analog-to-digital converter. Note that dither can only increase the resolution of a sampler; it cannot improve the linearity, and thus accuracy does not necessarily improve. Usually, signals are sampled at the minimum rate required, for economy, with the result that the quantization noise introduced is white noise spread over the whole pass band of the converter. If a signal is sampled at a rate much higher than the Nyquist rate and then digitally filtered to limit it to the signal bandwidth, then there are three main advantages:
- digital filters can have better properties (sharper rolloff, phase) than analogue filters, so a sharper anti-aliasing filter can be realised and then the signal can be downsampled giving a better result
- a 20 bit ADC can be made to act as a 24 bit ADC with 256x oversampling
- the signal-to-noise ratio due to quantization noise will be higher than if the whole available band had been used. With this technique, it is possible to obtain an effective resolution larger than that provided by the converter alone
These are the most common ways of implementing an electronic ADC:
- A direct conversion ADC or flash ADC has a bank of comparators, each firing for its decoded voltage range. The comparator bank feeds a logic circuit that generates a code for each voltage range. Direct conversion is very fast, but usually has only 8 bits of resolution (255 comparators, since the number of comparators required is 2^n - 1) or fewer, as it needs a large, expensive circuit. ADCs of this type have a large die size, a high input capacitance, and are prone to produce glitches on the output (by outputting an out-of-sequence code). Scaling to newer submicron technologies does not help as the device mismatch is the dominant design limitation. They are often used for video, wideband communications or other fast signals in optical storage.
- A successive-approximation ADC uses a comparator to reject ranges of voltages, eventually settling on a final voltage range. Successive approximation works by constantly comparing the input voltage to the output of an internal digital to analog converter (DAC, fed by the current value of the approximation) until the best approximation is achieved. At each step in this process, a binary value of the approximation is stored in a successive approximation register (SAR). The SAR uses a reference voltage (which is the largest signal the ADC is to convert) for comparisons. For example, if the input voltage is 60 V and the reference voltage is 100 V, in the 1st clock cycle, 60 V is compared to 50 V (the reference divided by two; this is the voltage at the output of the internal DAC when the input is a '1' followed by zeros), and the voltage from the comparator is positive (or '1') because 60 V is greater than 50 V. At this point the first binary digit (MSB) is set to a '1'.
In the 2nd clock cycle the input voltage is compared to 75 V (halfway between 100 and 50 V; this is the output of the internal DAC when its input is '11' followed by zeros). Because 60 V is less than 75 V, the comparator output is now negative (or '0'). The second binary digit is therefore set to a '0'. In the 3rd clock cycle, the input voltage is compared with 62.5 V (halfway between 50 V and 75 V; this is the output of the internal DAC when its input is '101' followed by zeros). The output of the comparator is negative or '0' (because 60 V is less than 62.5 V), so the third binary digit is set to a '0'. The fourth clock cycle similarly results in the fourth digit being a '1' (60 V is greater than 56.25 V, the DAC output for '1001' followed by zeros). The result of this would be in the binary form 1001. This is also called bit-weighting conversion, and is similar to a binary search. The analogue value is rounded to the nearest binary value below, meaning this converter type is mid-rise (see above). Because the approximations are successive (not simultaneous), the conversion takes one clock-cycle for each bit of resolution desired. The clock frequency must be equal to the sampling frequency multiplied by the number of bits of resolution desired. For example, to sample audio at 44.1 kHz with 32 bit resolution, a clock frequency of over 1.4 MHz would be required. ADCs of this type have good resolutions and quite wide ranges. They are more complex than some other designs.
- A ramp-compare ADC (also called integrating, dual-slope or multi-slope ADC) produces a saw-tooth signal that ramps up, then quickly falls to zero. When the ramp starts, a timer starts counting. When the ramp voltage matches the input, a comparator fires, and the timer's value is recorded. Timed ramp converters require the least number of transistors. The ramp time is sensitive to temperature because the circuit generating the ramp is often just some simple oscillator. There are two solutions: use a clocked counter driving a DAC and then use the comparator to preserve the counter's value, or calibrate the timed ramp. A special advantage of the ramp-compare system is that comparing a second signal just requires another comparator, and another register to store the voltage value. A very simple (non-linear) ramp-converter can be implemented with a microcontroller and one resistor and capacitor. Conversely, a filled capacitor can be taken from an integrator, time-to-amplitude converter, phase detector, sample and hold circuit, or peak and hold circuit and discharged. This has the advantage that a slow comparator cannot be disturbed by fast input changes.
- A delta-encoded ADC has an up-down counter that feeds a digital to analog converter (DAC). The input signal and the DAC both go to a comparator. The comparator controls the counter. The circuit uses negative feedback from the comparator to adjust the counter until the DAC's output is close enough to the input signal. The number is read from the counter. Delta converters have very wide ranges, and high resolution, but the conversion time is dependent on the input signal level, though it will always have a guaranteed worst-case. Delta converters are often very good choices to read real-world signals. Most signals from physical systems do not change abruptly. Some converters combine the delta and successive approximation approaches; this works especially well when high frequencies are known to be small in magnitude.
- A pipeline ADC (also called subranging quantizer) uses two or more steps of subranging. First, a coarse conversion is done. In a second step, the difference to the input signal is determined with a digital to analog converter (DAC). This difference is then converted finer, and the results are combined in a last step. This can be considered a refinement of the successive-approximation ADC wherein the feedback reference signal consists of the interim conversion of a whole range of bits (for example, four bits) rather than just the next-most-significant bit. By combining the merits of the successive approximation and flash ADCs this type is fast, has a high resolution, and only requires a small die size.
- A Sigma-Delta ADC (also known as a Delta-Sigma ADC) oversamples the desired signal by a large factor and filters the desired signal band. Generally a smaller number of bits than required are converted using a flash ADC after the filter. The resulting signal, along with the error generated by the discrete levels of the flash, is fed back and subtracted from the input to the filter. This negative feedback has the effect of noise shaping the error due to the flash so that it does not appear in the desired signal frequencies. A digital filter (decimation filter) follows the ADC; it reduces the sampling rate, filters off unwanted noise and increases the resolution of the output (sigma-delta modulation, also called delta-sigma modulation).
Nonelectronic ADCs usually use some scheme similar to one of the above.
Commercial analog-to-digital converters
These are usually integrated circuits. Most converters sample with 6 to 24 bits of resolution, and produce fewer than 1 megasample per second. It is rare to get more than 24 bits of resolution because of thermal noise generated by passive components such as resistors. For audio applications and at room temperatures, such noise is usually a little less than 1 μV (microvolt) of white noise. If the Most Significant Bit corresponds to a standard 2 volts of output signal, this translates to a noise-limited performance that is less than 20~21 bits, and obviates the need for any dithering. Mega- and gigasample converters are available, though (Feb 2002). Megasample converters are required in digital video cameras, video capture cards, and TV tuner cards to convert full-speed analog video to digital video files. Commercial converters usually have ±0.5 to ±1.5 LSB error in their output. In many cases the most expensive part of an integrated circuit is the pins, because they make the package larger, and each pin has to be connected to the integrated circuit's silicon. To save pins, it's common for slow ADCs to send their data one bit at a time over a serial interface to the computer, with the next bit coming out when a clock signal changes state, say from zero to 5 V. This saves quite a few pins on the ADC package, and in many cases, does not make the overall design any more complex. (Even microprocessors which use memory-mapped IO only need a few bits of a port to implement a serial bus to an ADC.) Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample and hold circuits, instrumentation amplifiers or differential inputs, where the quantity measured is the difference between two voltages.
Application to music recording
ADCs are integral to current music reproduction technology.
Since much music production is done on computers, when an analog recording is used, an ADC is needed to create the PCM data stream that goes onto a compact disc. The current crop of AD converters utilized in music can sample at rates up to 192 kilohertz. Many people in the business consider this overkill and pure marketing hype, due to the Nyquist-Shannon sampling theorem. Simply put, they say the analog waveform does not have enough information in it to necessitate such high sampling rates, and typical recording techniques for high-fidelity audio are usually sampled at either 44.1 kHz (the standard for CD) or 48 kHz (commonly used for radio/TV broadcast applications). However, this kind of bandwidth headroom allows the use of cheaper or faster anti-aliasing filters with less severe filtering slopes. The proponents of oversampling assert that such shallower anti-aliasing filters produce less deleterious effects on sound quality, exactly because of their gentler slopes. Others prefer entirely filterless AD conversion, arguing that aliasing is less detrimental to sound perception than pre-conversion brickwall filtering. Considerable literature exists on these matters, but commercial considerations often play a significant role. Most high-profile recording studios record in 24-bit PCM at 176.4 or 192 kHz, or in DSD formats, and then downsample or decimate the signal for Red Book CD production. AD converters are used virtually everywhere an analog signal has to be processed, stored, or transported in digital form. Fast video ADCs are used, for example, in TV tuner cards. Slow on-chip 8, 10, 12, or 16 bit ADCs are common in microcontrollers. Very fast ADCs are needed in digital oscilloscopes, and are crucial for new applications like software defined radio, where the ADC's dynamic range is also important.
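As a closing illustration of the successive-approximation principle described in the list of converter types above, here is a small Python sketch (illustrative only, not a description of any particular chip; the function name and 4-bit width are assumptions). It reproduces the 60 V input / 100 V reference example, yielding the code 1001.

```python
def sar_adc(v_in, v_ref, bits):
    """Successive-approximation conversion: a binary search on the input."""
    code = 0
    for i in reversed(range(bits)):               # from MSB down to LSB
        trial = code | (1 << i)                   # tentatively set this bit
        dac_output = v_ref * trial / (1 << bits)  # internal DAC voltage
        if v_in >= dac_output:                    # comparator decision
            code = trial                          # keep the bit
    return code

code = sar_adc(v_in=60.0, v_ref=100.0, bits=4)
print(format(code, "04b"))  # 1001  (compared against 50, 75, 62.5, 56.25 V)
```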
http://www.reference.com/browse/analog
|A force, F, pulls a plate at velocity v0 across a viscous fluid. The lower plate is fixed.|
A flat plate is pulled with a force F across the top of a fluid which sits on a stationary flat plate. The top plate moves at velocity v0, and the separation of the plates is D. Experiments show that the force required to pull the plate is proportional to the velocity with which the plate moves and to the area of the plate, and inversely proportional to the separation of the plates, D. To a large extent, the force is independent of the material used for the moving plates. However, it does depend on the nature of the viscous fluid, and fluids are characterized individually by their viscous effect.
|The force by the fluid on the upper plate is directed to the left.|
We will express the relation between force and velocity in terms of the force per unit area on the upper plate by the fluid. The experimental result is
F/A = η v0 / D
The constant of proportionality, η, is called the viscosity. η is different for each fluid. It also depends on fluid temperature, typically becoming smaller at higher temperature for liquids, but larger at higher temperature for gases.
|A table of some typical viscosities at room temperature (not reproduced here).|
The units of viscosity are those of force per unit area divided by a velocity gradient, i.e. N·s/m² (= Pa·s) in SI units. In practice the unit used is the poise (named for the French physicist J. L. Poiseuille):
1 poise = 1 dyn·s/cm² = 1 g/(cm·s) = 0.1 Pa·s
(The poise is the natural viscosity unit in the cgs system of units.)
STEADY STATE VELOCITY FIELD FOR A VISCOUS FLUID BETWEEN MOVING PLATES
When the experiment described above is first begun, the velocity field is time dependent. If the field evolves to one in which the net force on a unit volume is zero, this field will not change. This condition is called the steady state condition. (It is not equilibrium, because the fluid is in motion.)
We approach this by considering a variation of the experiment above. First we imagine changing frames of reference, to one which is at rest on plate A rather than plate B. From there, we see plate B moving left at a constant velocity v0. Using the rule for viscous force, we conclude that the force on plate B by the fluid is the same magnitude as the fluid force on plate A, but directed to the right. (For relative motion with constant velocity, observers agree concerning the accelerations of and forces on objects.)
|The viscous fluid exerts a net force of zero on plate B.|
In order to have done that experiment, we had to exert a force to the left on plate B, to "hold it in place." We can imagine another experiment, in which a viscous force holds plate B in place. We build it as shown in the figure. Plate C moves to the left and the fluid below exerts a force to the left on plate B. Since the distances between plates are the same, and both velocities have the same magnitude, the net force on plate B is zero.
|The viscous fluid exerts a net force of zero on plate B, as viewed in a frame of reference with C stationary.|
Now we shift coordinate systems again, to one at rest on plate C. Now the motion looks to us as shown in the figure. The forces are unchanged, so the net force on plate B remains zero. We replace plate B with a slab of fluid. Since the forces acting across the surface are independent of the content of the surface, the net force on our slab of fluid remains zero. The velocity of the fluid remains constant. We have found the steady state velocity halfway between plates A and C.
|Velocity field for a viscous fluid between two plates, one of which is stationary.|
We can repeat this thought experiment.
At each stage, we can show that halfway between a moving slab of fluid and the bottom plate (at rest), the velocity is half that of the upper slab. This just says that the velocity field is proportional to the height above the bottom plate. The constant of proportionality is chosen so that the fluid has the correct velocity at the top plate. (There, as on the bottom, the fluid velocity equals the plate velocity.) We can write
v(y) = v0 y / D
MECHANICAL ENERGY IS NOT CONSERVED IN VISCOUS FLOW
The kinetic energy of every bit of fluid is constant in the steady state situation pictured above. Yet work is being done by the plate at a rate
P = F v0
where F is the force by the plate on the fluid. This work is all dissipated, and must appear as heat.
Within the fluid, we may visualize surfaces of constant velocity. In this case they are planes parallel to the plates, as shown above. We may think of layers of fluid, defined by the condition that each bit of fluid in the layer has the same velocity. Each layer contains a family of streamlines. The rate at which work is done by the bottom surface of a layer is P = F v. (F is the force by the bottom surface on the fluid below, and v is the velocity of the layer.) The rate at which work is done on the next layer below is P' = F v', where v' is the velocity of that lower layer. In the figure above v' < v, and we conclude that energy is dissipated in the interaction between the two layers.
LOCAL EXPRESSION FOR THE VISCOUS FORCE
|The force by a viscous fluid on the moving upper plate.|
The viscous force by the fluid on a unit area of the upper plate was represented as
F/A = η v0 / D
To calculate this force we need to know the separation of the two plates, D. This makes it appear that a distant object is the direct source of the force on the top plate, when in fact it is the fluid just below the plate which exerts the force. Now that we know that the velocity is simply proportional to the height above the lower plate, we know that
v0 / D = dv/dy
and we can write
F/A = η dv/dy
This is an expression that we can apply even if we do not know the distance D, as long as we know the local spatial derivative of the velocity. We can use it for a barge being pulled across a lake whose depth we do not know. We have a local expression for the viscous force.
VISCOUS FORCE ON A UNIT VOLUME IN LAMINAR FLOW
|The force on a volume of fluid by neighboring fluid above and below is determined by the velocity derivatives above and below.|
We choose our sample volume so that its shape conforms to the shape of the layers in laminar flow. An example for plane laminar flow is shown in the figure. The upper surface moves slightly faster than the lower. We imagine that the volume moves at the velocity of its center. In the figure, the force by the fluid above on our volume is
F_top = η A (dv/dy)_top
where A is the area of the top of our volume. Note the positive sign, because the faster upper fluid is pulling our volume towards the right. The net viscous force is
F_net = η A [ (dv/dy)_top - (dv/dy)_bottom ]
where A is the area of the top (and bottom) of our volume. Note that the fluid on the bottom moves slower, and pulls to the left. If the velocity derivative is the same, top and bottom, the net force is zero. This is the steady state situation if the bottom plate is fixed and the top plate moves at constant velocity, making the derivative of the velocity constant. To find the force per unit volume we assign a height Δy to the volume in the picture.
Then the volume of our sample is A Δy, and we can write the force per unit volume:
f = η [ (dv/dy)_top - (dv/dy)_bottom ] / Δy
In the limit as Δy → 0, this becomes
f = η d²v/dy²
If the density is constant, this result can be extended to three dimensions:
f = η ∇²v
Note that ∇² is a scalar operator. The vector character of the expression is carried by the velocity vector. Euler's equation for an incompressible viscous fluid becomes
ρ [ ∂v/∂t + (v·∇)v ] = -∇p + η ∇²v
(plus any body forces, such as gravity). When the viscosity is included, this is known as the Navier-Stokes equation for an incompressible fluid.
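As a cross-check on the steady-state argument above, the sketch below integrates the one-dimensional momentum-diffusion equation dv/dt = (η/ρ) d²v/dy² between a fixed lower plate and a moving upper plate and confirms that the field relaxes to the linear profile v(y) = v0 y/D. The fluid properties, grid and step count are arbitrary illustrative values.

```python
import numpy as np

eta, rho = 1.0e-3, 1.0e3      # water-like viscosity (Pa*s) and density (kg/m^3)
D, v0 = 1.0e-3, 0.01          # plate separation (m) and upper-plate speed (m/s)
ny = 51
y = np.linspace(0.0, D, ny)
dy = y[1] - y[0]

nu = eta / rho                # kinematic viscosity
dt = 0.4 * dy**2 / nu         # stable explicit time step

v = np.zeros(ny)              # fluid initially at rest
v[-1] = v0                    # no-slip: fluid at the top plate moves with it

for _ in range(20000):
    # explicit update of dv/dt = nu * d2v/dy2 in the interior
    v[1:-1] += dt * nu * (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dy**2

# The relaxed field is (numerically) the linear profile v(y) = v0 * y / D.
print(np.max(np.abs(v - v0 * y / D)))   # small residual
```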
http://www.millersville.edu/~jdooley/macro/derive/viscous/visplat/visplat.htm
Addition, subtraction and multiplication are defined; division need not be. (A, +, .) is a ring when "addition" (+) and "multiplication" (.) are well-defined internal operations over the set A with the following properties:
- (A, +) is a commutative group. The neutral element is 0.
- Multiplication is an associative (not necessarily commutative) operation distributive over addition: x.(y+z) = x.y + x.z and (x+y).z = x.z + y.z. (The "dot" symbol is usually omitted.)
The concept of a ring emerged in the 19th century; the name (Zahlring, in German) was coined by David Hilbert in 1897, and axiomatic definitions were given by Adolf Fraenkel in 1914.
Some additional properties of a ring are indicated by specific terms:
- Commutative ring: multiplication is commutative: x.y = y.x
- Unital ring: there's a multiplicative neutral element: 1.x = x.1 = x
- Integral domain: the product of two nonzero elements is nonzero.
- Division ring: any nonzero element has a multiplicative inverse.
A field is normally defined as a commutative division ring (a division ring where multiplication is commutative) unless otherwise specified. We consider as synonymous the terms noncommutative field, noncommutative division ring and skew field (some authors allow commutativity in a skew field). In French, a field (corps) is a division ring, commutative or not.
For the record, a semiring has fewer properties than a ring, as it is built on an additive monoid instead of an additive group. This means that a semiring does contain a zero element (neutral for addition), but subtraction is not always defined. In a semiring, zero is postulated to be multiplicatively absorbent (0.x = x.0 = 0).
(2006-02-15) Divisors of Zero (or zero divisors)
In some rings, the product of two nonzero elements can be zero.
In a ring, by definition, a nonzero element d is said to be a divisor of a given element a when there is a nonzero element x such that: d x = a
In particular (with a = 0), a divisor of zero is a nonzero element whose product by some nonzero element is equal to zero. There are no such things in integral domains (including division rings, fields and skew fields). The zero element itself is not considered a divisor of zero.
An idempotent element is a solution of the equation: x² = x
Every ring has at least one idempotent element (namely 0) and every unital ring has another trivial one (namely 1). If a unital ring has other idempotent elements (said to be nontrivial) then it has at least one divisor of zero because, in a unital ring, the previous equation reduces to the following zero product of two factors (neither of which is zero when x is neither 0 nor 1):
x (1-x) = 0
If some nth power of an element is zero, that element is said to be nilpotent: xⁿ = 0. Clearly, a nonzero nilpotent element is a divisor of zero.
The simplest example of a nilpotent element is "2" in the ring of integers modulo 4 (the ring formed by the 4 residues of integers modulo 4), since 2 · 2 = 4 = 0. One example of a ring with divisors of zero which doesn't contain any nonzero nilpotent elements is the ring of residues modulo a squarefree radix that is not a prime (modulo 6, for instance, 2 · 3 = 0, yet no nonzero element is nilpotent).
The terms idempotent and nilpotent were coined in 1870 by the American mathematician Benjamin Peirce (1809-1880). Peirce (whose name rhymes with "terse" or "purse") taught at Harvard for nearly 50 years and is also remembered for proving (in 1832) that an odd perfect number (if such a thing exists) cannot have fewer than 4 distinct prime factors.
(2006-06-13) Characteristic of a Ring A
The smallest positive p, if any, for which all sums of p like terms vanish.
In a unital ring A, we may call "1" the neutral element for multiplication and name the elements of the following sequence after integers: 1, 2 = 1+1, 3 = 1+1+1, 4 = 1+1+1+1, ... (n+1) = n+1, ...
If all the elements in this sequence are nonzero, the ring is said to have characteristic zero. Otherwise, the vanishing integers are multiples of the least of them, which is called the characteristic of the ring, denoted char(A). The only ring of characteristic 1 is the trivial field (where 1 = 0).
The characteristic of a nontrivial unital ring without divisors of zero is either 0 or a prime number. (HINT: any "integer" (1+1+...) corresponding to a prime divisor of a composite characteristic is a divisor of zero.) In particular, the characteristic of any nontrivial field (or skew field) is either 0 or a prime number.
The characteristic of a non-unital ring is defined as the least positive integer p such that a sum of p identical terms always vanishes (if there's no such p, then the ring is said to have zero characteristic).
Frobenius Map: If the characteristic p of a commutative ring is a prime number, we have:
(x y)^p = x^p y^p    and    (x + y)^p = x^p + y^p
The former relation is due to commutativity. The latter relation comes from Newton's binomial formula, with the added remark that the binomial coefficient C(p,k) is divisible by p, if p is prime, unless k is 0 or p. The map defined by F(x) = x^p thus respects both addition and multiplication. It is a ring homomorphism, which is called the Frobenius map in honor of Georg Frobenius (1849-1917), who discovered the relevance of such things to algebraic number theory in 1880. (A quick numerical check of the latter identity is sketched below, just before the entry on residue rings.)
The automorphism group of the Galois field GF(p^n) is a cyclic group of order n, generated by the above Frobenius map.
(2006-02-15) Ideal I in a Ring A
An ideal is a multiplicatively absorbent subring.
A subring is a ring contained in another (using the same operations). An ideal is a subring that contains a product whenever it contains a factor. For a right ideal I, the product xa is in I whenever x is: Ia ⊂ I (for any a in A). For a left ideal I, the product ax is in I whenever x is: aI ⊂ I. Unless otherwise specified, an ideal is both a right ideal and a left ideal.
The sum, the product or the intersection of two ideals is itself an ideal (the product of two ideals is contained in their intersection). The sum (or the product) of two sets is defined to be the set whose elements are sums (or products) of elements from those two sets.
One example of an ideal is the set aA of all the multiples of an element a in the ring A (among the integers, the set of all even integers is such an ideal). An ideal which is thus "generated" by a single element is called a principal ideal. A ring, like the ring of integers, whose ideals are all principal is a principal ring. Such a ring is called a principal integral domain (abbreviated PID) if it has no divisors of zero (i.e., the product of two nonzero elements is never zero). Following Bourbaki, some authors define a principal ring to be what we call a PID.
Ideals were introduced in 1871 by Richard Dedekind (1831-1916) as he considered, in particular, what are now known as prime ideals: an ideal is defined to be prime if it doesn't contain a product unless it contains at least one of its factors (among integers, the multiples of a prime number form a prime ideal).
The radical Rad(I) of an ideal I is the set of all ring elements which have at least one of their powers in I. The radical of an ideal is an ideal. An ideal which is the radical of another is called a radical ideal. In particular, every prime ideal is a radical ideal. There is no nilpotent residue modulo a radical ideal.
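As promised above, here is a quick numerical check of the "freshman's dream" identity behind the Frobenius map, carried out in the commutative ring of integers modulo a prime p; the chosen prime and the composite counterexample are arbitrary illustrative values.

```python
# Check that (x + y)^p == x^p + y^p holds modulo a prime p,
# i.e. that F(x) = x^p respects addition in a ring of prime characteristic.
p = 7   # any prime will do

for x in range(p):
    for y in range(p):
        lhs = pow(x + y, p, p)
        rhs = (pow(x, p, p) + pow(y, p, p)) % p
        assert lhs == rhs

# The same fails for a composite modulus, e.g. 4: (1 + 1)^4 = 0 but 1^4 + 1^4 = 2.
print(pow(2, 4, 4), (pow(1, 4, 4) + pow(1, 4, 4)) % 4)
```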
(2006-02-15) Residue Ring (modulo a given ideal I of a ring A)
The ring A/I, which consists of all residue classes modulo I.
Modulo an ideal I of a ring A, the residue class (or simply the residue) [x] of an element x of A is the set of all elements y of A for which x-y is in I. The set of all residues modulo I is denoted A/I. It is a ring, which is variously called quotient ring, factor ring, residue-class ring or simply residue ring.
For example, ℤ/4ℤ is the ring formed by the four residue classes modulo 4. (Note that "2" is a nilpotent divisor of zero in it.) The notation ℤp instead of ℤ/pℤ is not recommended, as the former is best reserved for the ring of p-adic integers.
(2006-04-27) Cauchy Product
A well-defined internal operation among sequences in a ring.
The Cauchy product of two sequences (a0, a1, a2, ...) and (b0, b1, b2, ...) of elements from a ring A is the sequence (c0, c1, c2, ...) where:
c0 = a0 b0,  c1 = a0 b1 + a1 b0,  c2 = a0 b2 + a1 b1 + a2 b0,  etc.
The set of the sequences whose terms are elements of the ring A (denoted A^ℕ) has the structure of a ring (the so-called formal power series over A) if endowed with direct addition (the n-th term of a sum being the sum of the n-th terms of the two summands) and the Cauchy multiplication defined above. The subset, denoted A^(ℕ), consisting of those sequences which have only finitely many nonzero terms forms a subring of the above ring, better known as the [univariate] polynomials over A, denoted A[x] and discussed next.
(2006-04-06) A[x]: Ring of formal polynomials over a ring A
It's endowed with component-wise addition and Cauchy multiplication.
A finite sequence of elements of a ring A (or, equivalently, a sequence with finitely many nonzero elements) is called a polynomial over A. The set of all such polynomials is a ring (often denoted A[x], where x is a "dummy variable") which is a subring of the aforementioned ring of "formal power series", under direct addition and Cauchy multiplication.
Each term of the sequence defining a polynomial is called a coefficient. The degree of a polynomial is the highest of the ranks of its nonzero coefficients (the lowest rank being zero). The null polynomial ("zero") has no nonzero coefficients, and its degree is defined to be -∞ ("minus infinity") so that, in a ring without divisors of zero, the degree of a product is always the sum of the degrees.
Formal Polynomials vs. Polynomial Functions:
To a polynomial (a0, a1, ..., an) of degree n, we associate the function f defined by f(x) = a0 + a1 x + ... + an x^n. However, that function and the polynomial which defines it are two different things entirely... For example, over the finite field GF(q), the distinct polynomials x and x^q correspond to the same function. In other words, the map from polynomials to polynomial functions need not be injective. However, that map is indeed injective in the special case of polynomials over ordinary signed integers or any superset thereof, including rational, real, surreal or complex numbers (and p-adic numbers too, for good measure).
Whenever the distinction between a polynomial and its associated function must be stressed, the former may be called a formal polynomial. Similarly, infinite sequences of coefficients are called formal power series; those may or may not be associated with a convergent power series which would define a proper function...
Over a noncommutative ring, the concept of polynomials does not break down, but the above association of a polynomial with a function is dubious.
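The Cauchy product defined above is exactly the rule by which polynomials are multiplied coefficient by coefficient. The short sketch below implements it for finite coefficient lists over the integers; the function name and the example polynomials are illustrative only.

```python
def cauchy_product(a, b):
    """Multiply two polynomials given as coefficient lists (lowest rank first):
    c_n = sum over i + j = n of a_i * b_j."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + x) * (1 - x + x^2) = 1 + x^3, so the degrees add (1 + 2 = 3).
print(cauchy_product([1, 1], [1, -1, 1]))   # [1, 0, 0, 1]
```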
(2006-04-05) GR(q,r): Galois ring of characteristic q = p^m and rank r
The modulo-q polynomials modulo a polynomial which is irreducible modulo p.
Let q be a power of a prime p. Let f be some monic polynomial modulo q, of degree r, which is irreducible modulo p (i.e., f cannot be written as a product of lower-degree polynomials modulo p). The Galois ring of characteristic q and rank r is (ℤ/qℤ)[x] / (f(x)).
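To tie together several notions from these entries (divisors of zero, idempotent elements, nilpotent elements), the sketch below lists them by brute force in a small residue ring ℤ/nℤ. The modulus 12 is an arbitrary illustrative choice; because 12 is composite and not squarefree, all three kinds of special elements appear.

```python
n = 12   # an arbitrary composite, non-squarefree modulus

elements = range(n)

zero_divisors = [d for d in elements if d != 0
                 and any((d * x) % n == 0 for x in elements if x != 0)]
idempotents   = [x for x in elements if (x * x) % n == x]
nilpotents    = [x for x in elements
                 if any(pow(x, k, n) == 0 for k in range(1, n + 1))]

print("zero divisors:", zero_divisors)   # [2, 3, 4, 6, 8, 9, 10]
print("idempotents:  ", idempotents)     # [0, 1, 4, 9]
print("nilpotents:   ", nilpotents)      # [0, 6]
```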
http://www.numericana.com/answer/rings.htm
|Lectures on Physics has been derived from Benjamin Crowell's Light and Matter series of free introductory textbooks on physics.|
Waves on a String
So far you have learned some counterintuitive things about the behavior of waves, but intuition can be trained. The first half of this section aims to build your intuition by investigating a simple, one-dimensional type of wave: a wave on a string. If you have ever stretched a string between the bottoms of two open-mouthed cans to talk to a friend, you were putting this type of wave to work. Stringed instruments are another good example. Although we usually think of a piano wire simply as vibrating, the hammer actually strikes it quickly and makes a dent in it, which then ripples out in both directions. Since this chapter is about free waves, not bounded ones, we pretend that our string is infinitely long.
After the qualitative discussion, we will use simple approximations to investigate the speed of a wave pulse on a string. This quick and dirty treatment is then followed by a rigorous attack using the methods of calculus, which may be skipped by the student who has not studied calculus. How far you penetrate in this section is up to you, and depends on your mathematical self-confidence. If you skip the later parts and proceed to the next section, you should nevertheless be aware of the important result that the speed at which a pulse moves does not depend on the size or shape of the pulse. This is a fact that is true for many other types of waves.
Consider a string that has been struck, (a), resulting in the creation of two wave pulses, (b), one traveling to the left and one to the right. This is analogous to the way ripples spread out in all directions from a splash in water, but on a one-dimensional string, "all directions" becomes "both directions." We can gain insight by modeling the string as a series of masses connected by springs. (In the actual string the mass and the springiness are both contributed by the molecules themselves.) If we look at various microscopic portions of the string, there will be some areas that are flat, (c), some that are sloping but not curved, (d), and some that are curved, (e) and (f). In example (c) it is clear that both the forces on the central mass cancel out, so it will not accelerate. The same is true of (d), however. Only in curved regions such as (e) and (f) is an acceleration produced. In these examples, the vector sum of the two forces acting on the central mass is not zero. The important concept is that curvature makes force: the curved areas of a wave tend to experience forces resulting in an acceleration toward the mouth of the curve. Note, however, that an uncurved portion of the string need not remain motionless. It may move at constant velocity to either side.
We now carry out an approximate treatment of the speed at which two pulses will spread out from an initial indentation on a string. For simplicity, we imagine a hammer blow that creates a triangular dent, (g). We will estimate the amount of time, t, required until each of the pulses has traveled a distance equal to the width of the pulse itself. The velocity of the pulses is then ±w/t.
As always, the velocity of a wave depends on the properties of the medium, in this case the string.
The properties of the string can be summarized by two variables: the tension, T, and the mass per unit length, μ (Greek letter mu).
If we consider the part of the string encompassed by the initial dent as a single object, then this object has a mass of approximately μw (mass/length × length = mass). (Here, and throughout the derivation, we assume that h is much less than w, so that we can ignore the fact that this segment of the string has a length slightly greater than w.) Although the downward acceleration of this segment of the string will be neither constant over time nor uniform across the string, we will pretend that it is constant for the sake of our simple estimate. Roughly speaking, the time interval between (g) and (h) is the amount of time required for the initial dent to accelerate from rest and reach its normal, flattened position. Of course the tip of the triangle has a longer distance to travel than the edges, but again we ignore the complications and simply assume that the segment as a whole must travel a distance h. Indeed, it might seem surprising that the triangle would so neatly spring back to a perfectly flat shape. It is an experimental fact that it does, but our analysis is too crude to address such details.
The string is kinked, i.e., tightly curved, at the edges of the triangle, so it is here that there will be large forces that do not cancel out to zero. There are two forces acting on the triangular hump, one of magnitude T acting down and to the right, and one of the same magnitude acting down and to the left. If the angle of the sloping sides is θ, then the total force on the segment equals 2T sin θ. Dividing the triangle into two right triangles, we see that sin θ equals h divided by the length of one of the sloping sides. Since h is much less than w, the length of the sloping side is essentially the same as w/2, so we have sin θ = 2h/w, and F = 4Th/w. The acceleration of the segment (actually the acceleration of its center of mass) is
a = F/m = 4Th/(μw²)
The time required to move a distance h under constant acceleration a is found by solving h = (1/2)at², yielding
t = √(2h/a) = w √(μ/2T)
Our final result for the velocity of the pulses is
v = w/t = √(2T/μ)
The remarkable feature of this result is that the velocity of the pulses does not depend at all on w or h, i.e., any triangular pulse has the same speed. It is an experimental fact (and we will also prove rigorously in the following subsection) that any pulse of any kind, triangular or otherwise, travels along the string at the same speed. Of course, after so many approximations we cannot expect to have gotten all the numerical factors right. The correct result for the velocity of the pulses is
v = √(T/μ)
The importance of the above derivation lies in the insight it brings, that all pulses move with the same speed, rather than in the details of the numerical result. The reason for our too-high value for the velocity is not hard to guess. It comes from the assumption that the acceleration was constant, when actually the total force on the segment would diminish as it flattened out.
Rigorous derivation using calculus (optional)
After expending considerable effort for an approximate solution, we now display the power of calculus with a rigorous and completely general treatment that is nevertheless much shorter and easier. Let the flat position of the string define the x axis, so that y measures how far a point on the string is from equilibrium. The motion of the string is characterized by y(x,t), a function of two variables.
Knowing that the force on any small segment of string depends on the curvature of the string in that area, and that the second derivative is a measure of curvature, it is not surprising to find that the infinitesimal force dF acting on an infinitesimal segment dx is given by
dF = T (∂²y/∂x²) dx
(This can be proven by vector addition of the two infinitesimal forces acting on either side.) The acceleration is then a = dF/dm, or, substituting dm = μ dx,
∂²y/∂t² = (T/μ) ∂²y/∂x²
The second derivative with respect to time is related to the second derivative with respect to position. This is no more than a fancy mathematical statement of the intuitive fact developed above, that the string accelerates so as to flatten out its curves.
Before even bothering to look for solutions to this equation, we note that it already proves the principle of superposition, because the derivative of a sum is the sum of the derivatives. Therefore the sum of any two solutions will also be a solution.
Based on experiment, we expect that this equation will be satisfied by any function y(x,t) that describes a pulse or wave pattern moving to the left or right at the correct speed v. In general, such a function will be of the form y = f(x-vt) or y = f(x+vt), where f is any function of one variable. Because of the chain rule, each derivative with respect to time brings out a factor of ±v. Evaluating the second derivatives on both sides of the equation gives
(±v)² f'' = (T/μ) f''
Squaring gets rid of the sign, and we find that we have a valid solution for any function f, provided that v is given by
v = √(T/μ)
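The two conclusions above, that an initial dent splits into two pulses and that every pulse travels at v = √(T/μ) regardless of shape, can be checked by integrating the wave equation numerically. The sketch below uses a simple finite-difference scheme; the tension, mass density and pulse shape are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

T, mu = 4.0, 1.0                  # tension (N) and mass per unit length (kg/m)
v = np.sqrt(T / mu)               # predicted pulse speed, here 2 m/s

L, nx = 10.0, 1001
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / v                 # satisfies the stability condition v*dt <= dx

# Initial condition: a stationary triangular dent in the middle of the string.
y = np.maximum(0.0, 0.2 - np.abs(x - L / 2))
y_prev = y.copy()                 # zero initial velocity

t_end = 1.5
for _ in range(int(t_end / dt)):
    y_next = np.zeros_like(y)     # fixed (far away) endpoints stay at zero
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + (v * dt / dx) ** 2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next

# The dent splits into two half-height pulses whose peaks should now sit near
# L/2 - v*t_end and L/2 + v*t_end, i.e. they moved at the predicted speed.
mid = nx // 2
left_peak = x[np.argmax(y[:mid])]
right_peak = x[mid + np.argmax(y[mid:])]
print(left_peak, right_peak, "expected near", L/2 - v*t_end, L/2 + v*t_end)
```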
http://www.vias.org/physics/bk3_03_03.html
how temperatures are measured
By Dr J Floor Anthoni (2010)
Measuring temperature should be a most simple scientific exercise, one a primary school student could do to full satisfaction. It is therefore a surprise that it becomes a major problem to do right, in such a way that temperatures all over the world can be compared and stored in a database. Today there are still two temperature scales in use: Fahrenheit (previously in the UK, and still today in the USA) and Celsius (the rest of the world). The Fahrenheit scale has been replaced scientifically by the Celsius scale (called Centigrade in the UK and USA), and later by the Kelvin scale, which has identical one-degree steps.
Temperature is an important quantity in daily life, science and industry. Just about all processes depend on temperature, because heat makes molecules move or vibrate faster, resulting in faster chemical reactions. Heat is wanted and wasted, and so is cold. When substances are cold, the processes within them proceed more slowly, as in chilled or frozen foods. It does not surprise, therefore, that many ways have been invented to measure and control temperature.
Based on known extension of a known substance
When a substance (solid, liquid or gaseous) is heated, it extends or expands (with few exceptions). When such an extension can be seen, a thermometer can be made. Substances with high expansion coefficients are of course most suitable, but there are other requirements.
The mercury thermometer is the classical thermometer, based on the known expansion of mercury, a liquid metal. Its principle is simple: a (relatively large) volume of mercury inside a rigid glass bulb is warmed and expands into a narrow capillary tube of rigid glass. The larger the bulb and the smaller the capillary, the more sensitive the instrument becomes. Medical mercury thermometers are capable of measuring to a tenth of a degree Celsius. The mercury thermometer has the following properties:
+ mercury expands easily
+ it conducts heat easily, being a liquid metal
+ it is silvery, opaque and clearly visible
+ it does not stick to glass
+ a minimum-maximum thermometer can be made with it
+ it has a high boiling point (357ºC) and can thus be used for high temperatures
- it freezes at -39ºC and this could cause the bulb to crack
- it is relatively expensive
- it is considered an ecological hazard, even though liquid mercury is harmless
The alcohol thermometer is also widely used, with the following properties:
+ it expands easily, even more than mercury
- it is not a good conductor of heat
+ it can be coloured in any colour to be easily visible
- it has a low boiling point of +78ºC
+ it has a low freezing point of -112ºC and is suitable for low temperatures
+ it is inexpensive
- it wets glass and gives a less precise readout
+ it is not harmful to the environment
The Six's maximum and minimum thermometer is a clever use of an alcohol bulb thermometer with some mercury in its capillary, topped up with more alcohol and ending in an empty bulb with some vacuum. Because mercury is so dense, a magnetic metal needle will float on it, and can be pushed against some friction (a metal back plate). At maximum temperature the furthest needle will stay behind, attracted by the metal backing plate. Likewise, at minimum temperature the closest needle will stay behind. After reading the thermometer, the two needles can be re-set (drawn onto the mercury level) with an external magnet, or by pushing the metal back plate away from the magnetic needles, which then descend by the pull of gravity. The Six's thermometer has the advantages and disadvantages of both mercury and alcohol thermometers, but its capillary must be wide enough to hold the metal floating pins, which means that it cannot be read very accurately (0.5ºC is difficult).
Please note that bulb thermometers are sensitive to outside pressure and are thus less suitable for deep-sea temperature measurements, unless they are encased inside a rugged mantle.
The industrial bulb thermometer consists of a relatively large copper bulb with a long capillary tube that can be bent and guided through the innards of an appliance. At its end it has a tiny pressure sensor (manometer) which operates an electrical switch. With a screw its setting can be altered. These thermo-controllers are extensively used in air conditioners, washing machines and other appliances.
A metal spring thermometer can be made by coiling a metal strip with an indicator attached to its loose end. When the strip expands, the coil unwinds somewhat, which moves the indicator. This kind of thermometer is useful where a wide range of temperatures needs to be measured with low accuracy, as in cooking food and for ovens.
The bi-metal thermometer is based on the difference in extension between two metal strips, sandwiched together and riveted or spot-welded at both ends. This causes the strip to bend when the temperature changes. The strip can be bent, folded or coiled to amplify its effect. Bi-metal thermometers are extensively used in temperature controllers to switch electrical devices like warmers and coolers on or off. They are less suitable for absolute temperature measurement. Some bi-metal thermometers are dimpled to give a click-clack effect, a positive transition at a certain temperature (click), but with hysteresis (lagging behind) when clacking back.
Temperature also makes electrons move faster inside conductors like metals, thereby changing their resistance. The platinum resistance thermometer is based on its resistance changing precisely with temperature. The change in resistance can be measured with an electronic circuit, amplified as an electrical signal and shown on a voltage indicator. To minimise external influences like supply voltage variations, a 'bridge' circuit is used which essentially measures the difference in voltage between the platinum resistance and another known resistance. Because platinum is a noble metal, the thermometer is very stable while able to operate over a very wide range of temperatures. For ultimate precision, linearising circuits are applied, and the 'known' resistor may be kept at a known temperature. (A small numerical sketch of such a resistance-to-temperature conversion is given below.)
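As an illustration of how a resistance reading from a platinum sensor is converted into a temperature, the sketch below applies the Callendar-Van Dusen relation for a nominal 100-ohm platinum element (a "PT100") and inverts it for temperatures above 0ºC. The coefficients are the commonly quoted IEC 60751 values; the function names are illustrative and not from this article.

```python
import math

# IEC 60751 coefficients for industrial platinum sensors (0 to 850 deg C range)
R0 = 100.0          # resistance at 0 deg C for a PT100 element, in ohms
A = 3.9083e-3       # 1/degC
B = -5.775e-7       # 1/degC^2

def pt100_resistance(t_celsius):
    """Callendar-Van Dusen relation R(T) = R0*(1 + A*T + B*T^2), for T >= 0 deg C."""
    return R0 * (1.0 + A * t_celsius + B * t_celsius**2)

def pt100_temperature(r_ohms):
    """Invert the quadratic to recover temperature from a measured resistance."""
    return (-A + math.sqrt(A * A + 4.0 * B * (r_ohms / R0 - 1.0))) / (2.0 * B)

print(round(pt100_temperature(138.51), 2))   # a reading of 138.51 ohms is ~100 deg C
print(round(pt100_resistance(25.0), 2))      # ~109.73 ohms at 25 deg C
```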
The thermocouple thermometer is based on the difference in conductivity (electron mobility) between two metals, brought into contact with one another or spot-welded together. When two dissimilar conductors are brought together, a voltage difference occurs, which can be measured. When warmed, the voltage increases due to higher electron mobility. Thermocouple thermometers can measure a large range of temperatures and are very stable. They are also independent of the contact area, and are thus easy to make, and they are insensitive to outside pressures. However, thermocouples occur in pairs, and one of the pair must be kept at a constant known temperature. When thermocouples are stacked in series, their sensitivity increases proportionally; such a stack is known as a thermopile. They can be used for measuring heat flow.
The thermistor thermometer is based on the conductivity of a semiconductor, which is quite sensitive to temperature, so it acts like a resistance thermometer. Unfortunately the resistance change is not linear and can be corrected only to some degree. It also has a very limited range. Thermistor thermometers are suitable for measuring the temperature of living organisms, like humans. They can be made rather small (less than 1 mm).
Infra-red thermometers measure the infra-red (IR) radiation of substances, so they do not need to be in direct contact with them. But the measured object must be warmer than the infra-red detector, so they are most suitable for measuring high temperatures at a safe distance. By cooling the IR detector to a known temperature, lower temperatures, like those of living organisms, can also be measured. Note that the CO2 in air absorbs IR radiation, which limits their use and their accuracy. Passive infra-red (PIR) detectors also detect warmer-than-air objects, but they are used for detecting the movement of such objects, not their precise temperature.
The Stevenson Screen
The Stevenson screen was designed by Thomas Stevenson (1818-1887), a British civil engineer, in order to measure air temperatures more accurately, rather than side effects like solar irradiation heating up the thermometers. To reflect heat back it is painted white, though better still would have been reflective aluminium. It has louvered sides to let the air through but not the sunlight. Once it became an accepted standard, the Stevenson screen spread all over the world; it allows temperatures to be compared wherever they are measured. A lot of thought and experience went into its design: the door swings down rather than to one side so that the wind won't catch it on windy days and rip it off its hinges, and it opens facing north, to keep the sun from shining directly on the thermometers while they are being read. Inside it one finds two normal thermometers (alcohol for cold areas, mercury for warm places), but one of these has its bulb wetted by a wick soaked in a bottle of water. This wet-bulb thermometer gives an indication of evaporation, because evaporation of water causes cooling. There is usually also a max-min thermometer. The thermometers are placed such that they can be read with ease and replaced with minimum effort. An important consideration is also that the louvered box stands a fixed distance above the ground, for least interference with low objects that may impede wind flow (and snow).
Temperature reading errors
Suppose we have stations with the finest thermometers inside the most standard Stevenson screens, located in rural areas away from urban disturbances; then surely readings must always be accurate? They are not, for various reasons. In a paper (Frank 2011, cited below), scientists are reminded of the natural uncertainty (or inaccuracy) in thermometer measurements, arising from reading errors, instrument errors, time-of-day errors, poor location and short-term weather fluctuations. It creates a band of almost 1 degree C around observations. In scientific terms, it means that it cannot be said with certainty that the world has warmed since 1880.
Draw a horizontal line from just above 0 on the left to the right of the figure and it will traverse the grey envelope. In the words of the authors: "The ±0.46ºC lower limit of uncertainty shows that between 1880 and 2000, the trend in averaged global surface air temperature anomalies is statistically indistinguishable from 0ºC at the 1-sigma level [half the width of the grey envelope]. One cannot, therefore, avoid the conclusion that it is presently impossible to quantify the warming trend in global climate since 1880."
Frank, Patrick (2011): Uncertainty in the Global Average Surface Air Temperature. Multi-Science vol 21/8. http://multi-science.metapress.com/content/c47t1650k0j2n047/?p=7857ae035f62422491fa3013c9897669&pi=4 (not free).
What do we measure?
What do we measure with Stevenson Screen meteorological thermometers? The problems with temperature measurements do not end with the ones described above, because the real question is what they actually measure. It is claimed that they measure Earth's surface temperature, but is that really so? What do the maximum and minimum temperatures tell us? Is the day's average equal to the middle between maximum and minimum?
The graphs show some of the problem. A day begins with the blue curve of net sunlight beginning just before six in the morning and ending just after six in the evening (apparently in spring). It doesn't take long before the air begins to warm too (sensible heat, orange) due to the warming of the surface, and later still some evaporation happens (latent heat, cyan). But watch what infrared out-radiation does (net IR, magenta), shown upside down because it goes out rather than in. It increases somewhat during the day and is still present at night, its total area equalling that of sensible heat (conduction and convection). In other words, the idea of infra-red out-radiation from the surface is only half supported by measurements, and the part that does radiate is soon absorbed by air molecules and converted into sensible heat.
|This graph shows measured temperatures during a single year. MSAT means Meteorological Surface Air Temperature, the temperature inside the Stevenson Screen. It has two outcomes: Min MSAT, the minimum temperature (black), and Max MSAT, the maximum (magenta). The average between these is taken as the surface temperature for the global temperature datasets. But as you can see, it does not represent the actual surface temperature, measured 1.5 m lower, shown in blue (Max) and yellow (Min). The average between these two is considerably larger. Also note that the Min MSAT follows the minimum surface temperature and that Max MSAT comes close to the real average.|
Roy Clark (2010): What surface temperature is your model really predicting? http://hidethedecline.eu/media/BLANDET/What%20Surface%20Temperature%20V2_R%20Clark_9%2020%2010.pdf
Roy Clark (2010): It Is Impossible For A 100 ppm Increase In Atmospheric CO2 Concentration To Cause Global Warming. http://venturaphotonics.com/GlobalWarming.html
Urban Heat Islands
It is human nature to change one's environment for maximum comfort, which means shutting out the nasty aspects of weather like rain, cold wind and intolerable heat. So where people live, one finds wind breaks, shading trees, houses, roofs, concrete, parking areas, roads, air conditioners, cars and airplanes, all contributing to a change in air temperature. And they all cause extra heat. Where Stevenson screens once stood isolated in a meadow, over time they find themselves surrounded by civilisation, causing the air temperature to rise.
This is called the Urban Heat Island (UHI) effect, which can corrupt temperature data substantially.
|This image (courtesy Anthony Watts) shows the urban heat island effect over Reno, Nevada, USA before midday. The measured temperature varies from 47 to 57ºF (a range of about 5ºC), so the question is: what is THE temperature of Reno? Is it the average (51) or the minimum (47)? Clearly, the UHI causes a formidable difference between cities and rural places, and more so with bigger cities. Its main problem lies in its unpredictability from place to place and over time.|
|Tokyo, with its 18 million inhabitants and massive urbanisation and transport systems, has a very significant UHI signature, as shown in this graph (from Anthony Watts). It has increased by a massive 3ºC in the past century and is still increasing. By comparison, nearby Hachijo Island, which has also suffered some urbanisation, shows a modest temperature increase of less than 0.5ºC in a century. Which of the two stations would you exclude from a world temperature database? Guess which one the people of Tokyo are more interested in. Note also that temperature swings (a decadal cycle) are larger at Hachijo, perhaps caused by swings in sea temperature.|
|The graph shown here was derived from 47 counties in California by averaging their temperature trends for the period 1940-1996 and plotting them against population size: rural stations on the left and urban stations on the right. Through the data points a straight line can be drawn which would cross the zero temperature trend. Also shown on this graph are the six stations used by NASA GISS from which global averages are calculated. As can be seen, five out of six are located where a significant Urban Heat Island (UHI) effect is experienced, of about 0.6 degrees. Not shown is the historical growth of these counties over the 56 years, but it is evident that much of 'global warming' consists of the UHI. Many similar studies exist, all consistently showing that UHI seriously pollutes the instrumental record.|
|In 1996 Goodridge grouped Californian counties by population size and obtained these three temperature curves for the 20th century, using standard temperature datasets. Once more it showed that population density (UHI) is the main contributor to 'warming'.|
On a daily basis, 1600 weather balloons are released from 800 stations, usually at the same times: 0:00 UTC and 12:00 UTC. The 2 m diameter rubber latex balloon is filled with hydrogen gas. Its mission is to measure temperature, relative humidity and pressure, which are used for weather forecasting and observation. Modern weather balloons can now also measure position and wind speed by using GPS positioning.
|The advantage of weather balloons is that they truly measure the air's temperature, unaffected by Urban Heat Island effects. Satellite temperature measurements also have this advantage, but cannot measure over a range of altitudes. This graph compares the three methods over a period of 20 years. Note how balloons and satellites agree, and how the surface temperatures show an urban heat island effect of some +2 degrees. Not shown is how regular adjustments aim to bring these measurements into agreement; for instance, the starting point in this graph has been aligned this way, and perhaps 1998 as well.|
NOAA National Weather Service Radiosonde Observations. http://www.webmet.com/ Meteorological Resource Centre.
Met Monitoring Guide: http://www.webmet.com/met_monitoring/toc.html, chapter 9.1.2.
Ocean surface temperatures have been measured by ships for several centuries. First it was done by collecting surface water in a bucket while steaming on, but later the engine's cooling-water inlet was used. Unfortunately this made a difference, because the water inlet is at some depth under water. Today this may serve to advantage, because satellites can measure only the top few centimetres of the sea, since infrared radiation is rapidly absorbed by water. Because water continually evaporates from the sea, the surface film is somewhat colder than the water a few metres down. This map from Reynolds (2000) shows where the ships' tracks are, and that their measurements are in no way representative of the entire oceans.
|The graph shows both land and ocean temperatures from thermometers since 1880. As can be seen, the land temperature rises more steeply than the sea temperature, most likely caused by the Urban Heat Island effect. Even so, both follow similar oscillations: a steep short decline followed by a long slow incline. The sea warms by about 0.5 degrees per century whereas the land warms by about 1.2 degrees per century. Compare this with the UHI effect of Tokyo above. What is omitted from this graph is the steep decline before 1880.|
Ocean temperature buoys
Since the year 2000, and benefiting from technological advancement, an aggressive programme was begun to measure the oceans entirely, with tide gauge stations, moored buoys, drifters and ships of opportunity. The ARGOS satellite system circles Earth to collect the data, while the AOML has responsibility for the logistics of drifter deployment and quality control of the resulting data. The map shows the locations of ARGOS drifters from the USA (blue) and UK (red/orange); of course their positions change daily. A main advantage of the ocean drifters is that they collect data on the air as well as the sea at various depths, entirely without human error.
|A drifting buoy is an inexpensive, autonomous device which is deployed by ships of opportunity. Distributed throughout the oceans of the world, it is designed to drift freely with the ocean surface currents, has an average lifetime of more than a year, and can measure sea surface temperature, surface currents, and sea level pressure. The buoy is a round sphere of about 0.5 m diameter, from which an array of cables and sensors hangs. It measures temperature, salinity and ocean currents. The collected data are then transmitted back to shore via satellite. In July 1995, data were logged from more than 750 buoys.|
An expendable bathythermograph (XBT) is another inexpensive device which is also deployed by ships of opportunity. An XBT is a small instrument that is dropped into the ocean from a ship. During its descent at a constant rate, an XBT measures the temperature of the seawater through which it descends, and sends these measurements back to the ship through two fine wires that connect the ship to the instrument. XBTs generally have a depth limit of 750 meters, but some reach depths of 1800 meters. Many ships relay summaries of the vertical profiles of temperature back to the shore by satellite. Meteorological centers throughout the world receive data from both the XBTs and the buoys via a global communications network, and use it to prepare the analyses that are essential for forecasts of weather and climate.
The complete vertical temperature profiles are sent to data collection centers after the ships reach port. The Upper Ocean Thermal Center at AOML has responsibility for quality control of an average of 2,000 XBTs per month. The latest drifters are semi-autonomous, being capable of making deep dives to 200 m, drifting there for 9 days, and surfacing at intervals to transmit their data and recharge their batteries. Over 3000 of these autonomous drifters have been released so far. As their technology becomes more sophisticated, they could perhaps at some time also measure clarity, light extinction with depth, pH, pCO2, plankton concentrations, and oxygen and carbon fluxes.
Satellite Sea Surface Temperatures (SST)
Since satellites began to be used for measuring environmental variables (GOES), both land and sea temperatures have been measured with good accuracy. The map here shows average ocean temperatures for a given year. It is important to remember that this represents only the very thin surface of the oceans. The advantage of satellite measurements is that they truly cover the whole of the world. Their disadvantage is that they cannot measure absolute temperatures, and that they vary slowly with time (drifting).
http://www.aoml.noaa.gov/general/ Atlantic Oceanographic & Meteorological Laboratory (AOML).
http://www.aoml.noaa.gov/phod/dac/gdp.html Global Drifter Program.
http://www.aoml.noaa.gov/phod/dac/2006_gdp_report.pdf An impressive report on the ocean drifter programme (PDF slideshow).
The places where thermometers are placed were never selected with a view to collecting a representative set of temperatures from which the world's average could be calculated. They are simply located where people live, and that introduces the urban heat island effect. The two maps below show that the world is not adequately or evenly covered. To make matters worse, many temperature stations are fairly recent and do not have a long-term record. Others do not satisfy stringent quality requirements.
Averaging the temperature data
From the above maps one can see that it is impossible to arrive at an average temperature for every square on the grid. Besides, the squares become smaller towards the poles (but this can be accounted for). Yet this is precisely what NASA (USA) and the Climatic Research Unit (UK) have done, with disastrous results. These results were then used in the IPCC reports as if they were reliable. To make matters worse, these scientists have been 'adjusting' the original data to fit their expectations. It is important to remember that 'world average' temperatures mean less than a good time series of a single remote station. It also implies that the evidence from thermometers to support 'global warming' is entirely unreliable.
There is also a thermodynamic 'finer point': if one wishes to know the effective out-radiation, which is proportional to the fourth power of absolute temperature (T × T × T × T), then this should be taken into account, making the effective temperature noticeably larger than the average temperature (a small numerical illustration is given below). Finally, were average temperatures to have any meaning, they should also be related to the heat content where they were measured. Ice caps and oceans have large latent heat, whereas deserts have low latent heat. Thus in climatology, one should be very cautious about 'temperature averages'.
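The thermodynamic 'finer point' above can be made concrete with a toy calculation: for two regions at different absolute temperatures, the single temperature that would radiate the same total energy (proportional to T⁴) is higher than the plain arithmetic average. The numbers below are arbitrary illustrative values, not measurements.

```python
# Two regions at different absolute temperatures (kelvin); illustrative values.
t_cold, t_warm = 250.0, 300.0

arithmetic_mean = (t_cold + t_warm) / 2.0

# "Effective" temperature: the single temperature whose T^4 out-radiation
# equals the average of the two regions' T^4 out-radiation.
effective = ((t_cold**4 + t_warm**4) / 2.0) ** 0.25

print(arithmetic_mean)        # 275.0 K
print(round(effective, 1))    # ~278.4 K, noticeably above the plain mean
```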
For various known and unknown reasons, the chemical elements found on Earth have 'sister' elements or isotopes (Gk: isos = equal; topos = place; as in the same place in the periodic table of elements). Isotopes behave chemically alike but have different mass (a different number of neutrons). Some isotopes are unstable and fall apart by radioactive decay (alpha, beta or gamma radiation). One of the best known isotopes is radioactive carbon-14, which is created in the atmosphere from the element nitrogen. Because of its beta decay (emitting an electron) and half-life of about 5,700 years, it is extensively used in radio-carbon dating of biological substances (wood, shell, hair, etc.). Carbon-14 measures time rather than temperature. Note that the correct notation for the isotope carbon-14 is ¹⁴C.
(Tip: the º degree symbol can be typed by holding the ALT key while typing 167 (ALT+167). Similarly ‰ = ALT+0137, the ñ in La Niña = ALT+164, micro µ = ALT+0181 and beta ß = ALT+0223.)
Beryllium is the fourth element in the Periodic Table, after lithium and before boron. It has an atomic mass of 9, made up of 4 protons and 5 neutrons. It can be made as a fragment from heavier elements (nitrogen-14, oxygen-16) by cosmic bombardment (spallation), which expels protons and neutrons. Cosmic radiation itself also contains beryllium. Radioactive beryllium-10 has a half-life of 1.51 × 10⁶ years and decays by beta decay to stable boron-10, with a maximum energy of 556.2 keV.
|This figure shows two different proxies of solar activity during the last several hundred years. In red is shown the Group Sunspot Number (Rg) as reconstructed from historical observations by Hoyt and Schatten (1998a, 1998b). In blue is shown the beryllium-10 concentration (10⁴ atoms per gram of ice) as measured in an annually layered ice core from Dye-3, Greenland (Beer et al. 1994). Beryllium-10 is a cosmogenic isotope created in the atmosphere by galactic cosmic rays. Because the flux of such cosmic rays is affected by the intensity of the interplanetary magnetic field carried by the solar wind, the rate at which beryllium-10 is created reflects changes in solar activity. A more active sun results in lower beryllium concentrations (note the inverted scale on the blue plot). Note that the sun's variability is much greater than suggested by the satellite record (the solar constant).|
Oxygen-18 or ¹⁸O has two extra neutrons instead of the usual 8 (10 neutrons + 8 protons). It is a mysterious isotope that occurs in concentrations of around 0.2% and is stable (not radioactive). Practical measurements have shown that it correlates with temperature: higher concentrations mean lower temperatures, but the why and how remain somewhat elusive. The graph shows 18-O variations in foraminifers, which are usually found on sea bottoms in the shallow coastal zone. Present thinking is that colder temperatures cause ice caps, which are deficient in 18-O, to expand, leaving the sea more abundant in 18-O. Thus delta-18-O measures the amount of ice in the ice caps rather than actual surface temperature. As a consequence, the 18-O signature lags many hundreds of years behind surface temperature. When Earth is cooling, water is transported through the air to the ice caps, so the time lag is maximal and the 18-O signature changes more gradually than surface temperature. When Earth is warming, ice caps melt and meltwater flows almost instantaneously back to the sea, so the warming part of the 18-O signature lags less and changes more steeply.
Scientists use the symbol delta (δ), for the Greek letter 'd', to denote differences in quantities. The variations in isotopes are expressed as a percentage (%) or promille (‰) and calculated the way one would calculate relative profit:
profit (%) = ((sales - cost) / cost) × 100%
Likewise:
delta-18-O (‰) = ((measured value - standard value) / standard value) × 1000 ‰
where the standard value is either a standard sample (such as Pee Dee Belemnite for 13-C) or any other reference sample.
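As a quick worked example of the delta notation just defined, the sketch below converts a hypothetical isotope ratio into a delta value in promille; the ratios are invented for illustration and do not come from any real sample or standard.

```python
def delta_permil(measured_ratio, standard_ratio):
    """delta = ((measured - standard) / standard) * 1000, in promille."""
    return (measured_ratio - standard_ratio) / standard_ratio * 1000.0

# Hypothetical 18O/16O ratios for a sample and a reference standard.
sample_ratio   = 0.0020052
standard_ratio = 0.0020000

print(round(delta_permil(sample_ratio, standard_ratio), 2))   # +2.6 permille
```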
Carbon-13 is a natural stable isotope of carbon and has one extra neutron (7 neutrons + 6 protons). It makes up about 1.1% of all natural carbon on Earth. Whereas isotopes are normally detected by mass spectroscopy, carbon-13 can be detected sensitively with nuclear magnetic resonance (NMR). It is also a mysterious isotope in that it is preferentially avoided by plants, so wherever plants have taken carbon up, there is less 13-C in it. 13-C is always measured against a world standard called Pee Dee Belemnite (PDB) or similar. Belemnite is a calcium-rich deposit from the soft internal shells of ancient belemnite inkfish, with a delta-13-C agreed to be the zero base. The diagram shows typical concentrations (almost always negative) and where they occur. Note that the modern 'grasses' (maize, sorghum, sugarcane) have a four-step photosynthetic process (C4) which is more efficient than the much more common three-step (C3) process, but requires more warmth. See our soil section for more.
12-C and 13-C can be used as tracers that help explain ocean circulation. Plants find it easier to use the lighter isotope (12-C) when they convert sunlight and carbon dioxide into food, so large blooms of plankton (free-floating organisms) draw large amounts of 12-C into the oceans. If those oceans are stratified (layers of warm water near the top and colder water deeper down), the water cannot circulate; thus when the plankton dies, it sinks and carries the 12-C with it, making the surface layers relatively rich in 13-C. Where the cold waters well up from the depths (North Atlantic), they carry the 12-C with them. Thus, when the ocean was less stratified than today, there was plenty of 12-C in the skeletons of surface-dwelling species. Other indicators of past climate include the presence of tropical species, coral growth rings, etc.
Due to the differential uptake of 13-C in plants as well as in marine carbonates, it is possible to use these isotopic signatures in earth science. In aqueous geochemistry, by analyzing the delta-13-C value of surface and ground waters, the source of the water can be identified. However, there are some insurmountable problems with this isotope for detecting a 'human footprint' in CO2.
13-C/18-O clumped-isotope geochemistry
There is a slight thermodynamic tendency for heavy isotopes to form bonds with each other, in excess of what would be expected. Thus the occurrence of a CO2 molecule made up of one 13-C atom, one 18-O atom and one normal 16-O atom, adding up to a molecular weight of 47 (13+18+16), is just common enough to be used to detect temperature changes. Lab experiments, quantum mechanical calculations, and natural samples (with known crystallization temperatures) all indicate that delta-47 is correlated with the inverse square of temperature. Thus delta-47 measurements provide an estimate of the temperature at which a carbonate formed. 13-C/18-O paleothermometry does not require prior knowledge of the concentration of 18-O in the water (which the delta-18-O method does). This allows the 13-C/18-O paleothermometer to be applied to some samples, including freshwater carbonates and very old rocks, with less ambiguity than other isotope-based methods.
13-C/18-O clumped-isotope geochemistry: There is a slight thermodynamic tendency for heavy isotopes to form bonds with each other, in excess of what would be expected. Thus the occurrence of a CO2 molecule made up of one 13-C atom, one 18-O atom and one normal 16-O atom, adding up to a molecular weight of 47 (13+18+16), is just common enough to be used to detect temperature changes. Lab experiments, quantum mechanical calculations, and natural samples (with known crystallization temperatures) all indicate that delta-47 is correlated with the inverse square of temperature. Thus delta-47 measurements provide an estimate of the temperature at which a carbonate formed. 13-C/18-O paleothermometry does not require prior knowledge of the concentration of 18-O in the water (which the delta-18-O method does). This allows the 13-C/18-O paleothermometer to be applied to some samples, including freshwater carbonates and very old rocks, with less ambiguity than other isotope-based methods. The method is presently limited by the very low concentration of isotopologues of mass 47 or higher in CO2 produced from natural carbonates, and by the scarcity of instruments with appropriate detector arrays and sensitivities.

Beryllium-10: http://www.onafarawayday.com/Radiogenic/Ch14/Ch14-3.htm

In the previous chapter we discussed isotopes used to measure temperature and, strictly speaking, these too are proxies (L: procurare = to care for, to deal with; proxy = substitute, delegate, representative), even though they are methods rather than substitutes. Here we'll look at various other ways scientists have tried to measure past temperatures.

This graph from Global Warming Art (after Huang & Pollack, 1998) shows a borehole temperature reconstruction (showing 1ºC warming), aligned with the trace of the instrumental record from Brohan et al. 2006 (which shows the most warming of all instrumental records, watch out!). The graph goes back some 500 years, but the further back in time (and thus the deeper down), the larger the errors and the flatter the curve, and details disappear. The basis for borehole temperature measurement is that rock is a very poor heat conductor, but eventually, over time, a small surface temperature change works its way deeper down.

The year before (1997), the same authors (Huang & Pollack) produced a radically different graph from the same 6000 boreholes, and this one showed the Little Ice Age and the Medieval Warm Period earlier on. The 1998 publication selected 358 boreholes out of the qualifying set of 6000. What made the authors change their minds? The hockey stick was published in 1998. Coincidence? Peer pressure? Fraud?

+ direct measurement of temperature; no proxies.

The graph shows how difficult it is to make sense of borehole temperature data. In fact, it makes little sense. Researchers try to work backwards from the borehole data, using computer models, to a surface temperature record that looks plausible. This is not reliable. Look at the grey cluster of actual measurements to notice that nearly half the samples disagree with the other half. In other words, they disprove what the others are saying. In real science one cannot average such disagreements to arrive at a single agreement. It is called nonsense. "How many lies does one need to average to arrive at a single truth?" - Floor Anthoni

http://www.co2science.org/subject/b/summaries/boreholes.php a balanced account of various borehole measurements by various scientists. http://www.ncdc.noaa.gov/paleo/borehole/borehole.html University of Michigan global database of boreholes.
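As an aside, the physics the borehole method relies on can be sketched with the standard one-dimensional heat-conduction result: a step change ΔT at the surface propagates downward as ΔT·erfc(z / (2√(κt))). The numbers below are illustrative assumptions, not measured values, and show how strongly the signal is smoothed and damped at depth.

```python
# A minimal sketch of downward diffusion of a surface warming step (requires scipy).
import math
from scipy.special import erfc

KAPPA = 1.0e-6        # assumed thermal diffusivity of rock, m^2/s (typical order of magnitude)
DELTA_T = 1.0         # assumed 1 degC step warming at the surface
YEARS = 100
SECONDS = YEARS * 365.25 * 24 * 3600

for depth_m in (10, 50, 100, 200):
    # Fraction of the surface step that has penetrated to this depth after 100 years
    signal = DELTA_T * erfc(depth_m / (2.0 * math.sqrt(KAPPA * SECONDS)))
    print(f"{depth_m:4d} m: {signal:.2f} degC")
```

Working backwards from such a smoothed, damped profile to a surface temperature history is the inverse step that the criticism above is aimed at.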
Some of the ice masses on Earth have remained for hundreds of thousands of years, as on Antarctica and Greenland. An ice core is drilled with a hollow core drill, in 6 m sections at a time. The technique is surprisingly difficult and has been improved over time. The ice mass consists of layers accumulated from snow on top. As layer upon layer forms, the lower layers experience pressure and compaction. At some depth the firn (loose ice and snow) becomes compacted enough that the enclosed air becomes isolated. From here on the ice remains surprisingly similar in texture, with year bands, until a zone is reached where the ice 'flows' as described in part2/glaciers. From there on the age of the ice can no longer be ascertained from year bands.

Some trees grow very old, and within their stems they somehow retain traces of ancient climates. The width of tree rings represents growth rate, and is thought to track temperature because trees grow faster when it is warmer. But such trees depend even more on thaw, cloud level, nutrient availability, sunlight, moisture, CO2, root space, root competition and bacterial activity. A tree surrounded by larger trees receives less light. During droughts trees won't grow and may die. In other words, the widths of tree rings are poor proxies for ancient temperatures.

Comments about the CRU tree-ring 'hockey stick' as used by the IPCC: the infamous hockey stick graph produced by Mann, Bradley & Hughes (1998), and used by the IPCC in its Third Assessment Report as the 'smoking gun' of Global Warming, has been criticised and rebutted scientifically:

McKitrick: ".. our model performs better when using highly autocorrelated noise rather than proxies to 'predict' temperature. The real proxies are less predictive than our 'fake' data."

McShane and Wyner: "We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago." - "Furthermore, it implies that up to half of the already short instrumental record is corrupted by anthropogenic factors, thus undermining paleoclimatology as a statistical enterprise."

Calcite or calcium carbonate (CaCO3) is a common building material for sea creatures. Because it contains both carbon and oxygen, it can be used for the carbon-14 (time) and oxygen-18 (temperature) proxies. Dripstones or stalactites (hanging down) and stalagmites (standing below) form where ground water drips from a ceiling. Dissolved in the groundwater are several minerals, among them dissolved limestone. As the water slowly drips down, pausing at a low point of the stalactite (the upper part hanging down from the ceiling), some of it may evaporate, leaving a little limestone behind at a rate of 0.1-3 mm per year. Because moisture has an annual cycle, year rings can be seen. At the bottom a stalagmite forms, and at some point the two meet. Dripstones are surprisingly hard. The stalagmites have a more consistent form because droplets splatter and the moisture is spread more evenly.

Dissolution of limestone: CaCO3 (solid) + H2O + CO2 (aq) => Ca(HCO3)2 (aq)
Formation of limestone: Ca(HCO3)2 (aq) => CaCO3 (solid) + H2O + CO2 (aq)

Foraminifers (L: foramen = a hole; Gk: phero = to bear; hole-bearers) are complex single-celled animals, mostly living on the sea bottom, particularly in the shallow coastal zone. They occur in a great variety of species, often in zones defined by subtle changes in living conditions. All have a hard outer skeleton made of calcite, riddled with holes through which they extend long hairy arms for feeding and for moving slowly. Corals are animal polyps that live in clear sun-lit waters in symbiosis with plant cells within their skins. They build extensive coral skeletons that join up to make coral reefs. The individual hard corals are joined up by crustose calcareous algae, which are technically red seaweeds that also build limestone skeletons.
As coral reefs grow, they incorporate a chemical history of the atmosphere, but the bulk of a reef is too chaotic to read. There are, however, some coral colonies that slowly grow into massive forms several metres tall and wide, like Porites corals. These are called 'massive' corals even though their polyps remain small. Their mass is neatly ordered in growth rings like those of a tree, and can be used for analysis. One coral analysis has been dissected on this web site and is worth studying (Declining coral calcification ..).

http://en.wikipedia.org/wiki/Proxy_(climate) about climate proxies. http://www.ncdc.noaa.gov/paleo/borehole/borehole.html University of Michigan global database of boreholes.

Temperature in perspective: average global temperature has little meaning without viewing it in perspective, which is what Australian wine maker Erland Happ did with publicly available NCEP data. As a wine maker he noticed that Australia has been cooling rather than warming, and he set out on a quest to understand what the story is. He divided the world into three zones: the arctic where hardly anyone lives (blue zone), the northern hemisphere where most of the world lives (green zone), and the southern hemisphere down to where no more people are found (red zone). His results are shown in the three panels below, and a number of things strike one immediately. Erland Happ (2011): The character of climate change, part 2. http://wattsupwiththat.com/2011/08/16/the-character-of-climate-change-part-2. Must read.

In the chapters on the Urban Heat Island and thermometer locations above, we've seen that the instrumental temperature dataset is rather primitive and not representative of global temperature. But at least the records from rural stations could have shown credible temperature trends. Unfortunately the institutions charged with collecting temperature data have been making adjustments in order to show global warming. In this chapter we'll examine how they've done that and to what extent. As one can see, the climate data is in the hands of very few actors, which invites corruption of the data towards political ends. Fortunately much of the data is freely available (after adjustments), even though much has also been kept under wraps (CRU), as exposed by the Climategate scandal. Determined skeptics like Ross McKitrick, Stephen McIntyre, Anthony Watts, Joe d'Aleo, Fred Singer, John Daly and many others managed to show how much the temperature data has been corrupted, mainly in four invisible ways:

Q: Where would you safely store precious ice cores? A: In the desert (UCAR, Boulder, Colorado USA). [Ross McKitrick (Jul 2010): A Critical Review of Global Surface Temperature Data Products. For more detail about how temperature data is collected, stored and corrected, and the anomalies discovered. http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/surfacetempreview.pdf. PDF 78pp]

Rural USA temperature: the graph here shows average temperature over the USA from 1895 to 1996, spanning a whole century. Even though it includes urban thermometers, it shows no appreciable rise in temperature. The 1960s-1970s were cooler whereas the 1930s-1940s were warmer. Rural records have consistently shown no significant rise in temperatures. Please note that this is a very important scientific test of the AGW hypothesis, since any exception to the hypothesis (global + warming) disproves it. We may ask ourselves why the scientific method has been abandoned when it comes to global warming. John Daly (2006): What The Stations Say.
http://www.john-daly.com/stations/stations.htm - check if you can find any that show systematic warming. Excellent world-wide database. Central Europe Temperature: visit http://news.thatsit.net.au/Science/Climate/Global-Temperatures.aspx for more thermometer sites around the world, showing basically no significant warming either.

Reader, please note that the scientific method protects against nonsense. It goes as follows: "It doesn't take 100 scientists to prove me wrong, it takes a single fact." - Albert Einstein. "It is a typical soothsayer's trick to predict things so vaguely that the predictions can hardly fail: that they become irrefutable." - Sir Karl Popper. We'll now investigate how climate fraud was committed.

Hushing up instrument failures: where 'global warming' is involved, it has become common practice not to report instrument failures, particularly where such faults produce lower temperature readings. The satellite that first ignited the fury is NOAA-16. But as we have since learned, there are now five key satellites that have become either degraded or seriously compromised, resulting in ridiculous temperature readings. Even though the Indian government was long ago onto these faults, researcher Devendra Singh tried and failed to draw attention to the increasing problems with the satellite as early as 2004, but his paper remained largely ignored outside his homeland. For at least five years and perhaps longer, the NOAA National Climatic Data Centre (NCDC) has been hushing up the faults in their satellites, which is a cardinal sin for any scientist or scientific institute. The picture shows how the scanned path failed to reproduce the landscape below, resulting in an erroneous stripy pattern, now known as 'barcode'. The data was automatically fed into climate records. This scandal places the entire satellite record, and the use the IPCC made of it, in doubt. Dr. Timothy Ball: "At best the entire incident indicates gross incompetence, at worst it indicates a deliberate attempt to create a temperature record that suits the political message of the day." CO2insanity.com: link. climatechanedispatch.com: link.

The graph shows temperatures and their adjustments in Darwin (a smallish town), NW Australia. The blue curve is the actual temperature, which suffered a drop in 1940, thought to be 'unusual', but happening again around 1987. The average trend of the raw data shows 0.7 degrees of cooling per century. After undocumented adjustments (black curve), the red curve was arrived at, showing warming of 1.2 degrees per century. This is a very blatant case of cooking the temperature, and many such cases have been documented from all over the world. For more information, visit http://climateaudit.org/.

Upward adjustment of all raw data: Steven Goddard discovered that all US temperatures have been gradually adjusted upward by a whopping 0.5ºF without appropriate documentation. The reasoning behind this adjustment was entirely arbitrary: "many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites." The graph shows the difference between what the thermometers read (raw data) and the temperatures corrected by the USHCN. One would have expected adjustments to cancel one another out as thermometers are relocated. Could one call this fraud?
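A minimal sketch of how a per-century trend like the Darwin figures above is obtained from a station series, and of how a step adjustment can manufacture one. The data below is synthetic, not the Darwin record, and the +0.6 degC step is a hypothetical adjustment chosen only for illustration.

```python
import numpy as np

def trend_per_century(years: np.ndarray, temps_c: np.ndarray) -> float:
    """Least-squares linear trend of a temperature series, in degC per century."""
    slope_per_year = np.polyfit(years, temps_c, 1)[0]
    return slope_per_year * 100.0

# Synthetic illustration only: a trendless "raw" series, and the same series with a
# hypothetical +0.6 degC step adjustment applied from 1940 onward.
rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
raw = 28.0 + rng.normal(0.0, 0.3, years.size)
adjusted = raw + np.where(years >= 1940, 0.6, 0.0)

print(f"raw trend:      {trend_per_century(years, raw):+.2f} degC/century")
print(f"adjusted trend: {trend_per_century(years, adjusted):+.2f} degC/century")
```

With inputs like these the point is only that a single undocumented step shift is enough to turn a flat record into an apparent warming trend.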
This table is from the 7 important temperature stations of New Zealand, showing raw and adjusted trends. Averaging the unadjusted trends arrives at +0.08ºC per century, but after adjustment the trend becomes +0.59ºC per century. The New Zealand temperature database is managed and kept by NIWA, who have not been able to explain the adjustments since the culprit, Jim Salinger, has left. For more details see http://www.climatescience.org.nz/ who are fighting for the truth. See also an overview with links: http://wattsupwiththat.com/2012/03/07/the-cold-kiwi-comes-home-to-roost/

The graph shown here of unadjusted (green) and adjusted (red) temperatures shows the degree of fraud involved. One cannot believe that there are other scientists willing to defend this fraud. UPDATE 8 Oct 2010: the High Court has decided that the 'adjusted' temperature data cannot be used as an official record, and NIWA has also distanced itself: NIWA now denies there was any such thing as an "official" NZ Temperature Record, and "NZ authorities formally stated that, in their opinion, they are not required to use the best available information nor to apply the best scientific practices and techniques available at any given time. They don't think that forms any part of their statutory obligation to pursue 'excellence'." - what a mess, what a defeat for 'science'. link. Please note that NZ temperatures have a large influence on the 'world average' because there exist very few thermometers in the Southern Ocean. The NZ temperatures are then 'extrapolated' over a very large area. Ira Glickstein (2011): The PAST is Not What it Used to Be (GW Tiger Tale). http://wattsupwiththat.com/2011/01/16/the-past-is-not-what-it-used-to-be-gw-tiger-tale/

Rise and fall in thermometers: this graph shows annual mean temperature (magenta) and the number of thermometers taking part (dark blue). Thermometers were sparse before the Industrial Revolution (1850) but gradually rose in numbers, mainly in industrialised nations. After 1980 most were deselected in favour of automated thermometers. Note how temperatures jumped, first when thermometer numbers jumped up, and again when they jumped down. A second graph gives a detailed view of average temperature and thermometer numbers after 1950. Note how average temperature suddenly began to look like a hockey stick. How did they do this? Mainly by promoting thermometers from warm places and demoting those from higher altitudes and remote rural places. And in the United States, Anthony Watts - in a volunteer survey of over 1000 of the 1221 instrument stations - found that 89% were poorly or very poorly sited, using NOAA's own criteria. This resulted in a warm bias of over 1ºC. A warm contamination of up to 50% has been shown by no fewer than a dozen peer-reviewed papers, including, ironically, one by Tom Karl (1988), director of NOAA's NCDC, and another by the CRU's Phil Jones (2009). (Tom Karl and Phil Jones are at the centre of the Climategate scandal.)

Urbanisation by selection: Joseph D'Aleo (2009): Response to Gavin Schmidt on the Integrity of the Global Data Bases. Selecting warmer sites: this diagram from the above shows how, over time, more warm stations were selected. Horizontal is time, over one century, and vertical is average latitude, the distance from the equator. The curve represents the average latitude of the temperature stations used for calculating the world's temperature. One century ago their average latitude was 35 degrees, but gradually over time it changed to 20 degrees, with some inexplicable swings in between, as more southern stations were included and northern stations dropped off.
Thus, by design or by accident, more and more thermometer stations from warmer places were used and fewer and fewer from colder places. The result gives substantial overall warming.

More minimum records: this graph shows that the counts of minimum and maximum temperature records went out of lock-step. Before 1920 their numbers were roughly equal, the maxes sometimes outnumbering the mins. But from 1930 things went wrong, with the minimum records outnumbering the maximums; since 1980 the maxes are in the majority again, and since 2000 they vastly outnumber the mins, at a time when the globe has been cooling. As a result the past was artificially cooled and the present artificially warmed. Thus the average temperature has been doctored to fit the AGW hypothesis. "Fudging the data in any way whatsoever is quite literally a sin against the holy ghost of science. I'm not religious, but I put it that way because I feel so strongly. It's the one thing you do not ever do. You've got to have standards." - James Lovelock

Accidental data corruption: in the year 2000 a most curious and massive jump occurred in the temperature data held by NASA, affecting 48 states in the USA. It was not detected by the data keepers but by an attentive outsider, Steve McIntyre. The IPCC was over the moon with this sudden demonstration of catastrophic warming, but when it was exposed as a year-2000 bug in the programs, the correction was quietly made and hushed up. No longer was 1998 the warmest year on record, as had been trumpeted around the world. The important lesson is that outsiders are needed to keep a watchful eye on all intended and unintended data corruptions. It is also important to note that keeping temperature data is not just a question of storage: there are massive computer programs at work massaging and adjusting this data, which then becomes 'available' to the public as 'raw' data. What these programs do has not been documented and made public. It may take decades before the mess has been sorted out. http://climateaudit.org/2010/01/23/nasa-hide-this-after-jim-checks-it/ - you could not have imagined this. Essential reading.

"Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either." - Albert Einstein. Investigators Joe D'Aleo and Anthony Watts reported the following shortcomings in the temperature records:
http://www.seafriends.org.nz/issues/global/climate3.htm
The term border states refers to the five slave states of Delaware, Kentucky, Maryland, Missouri, and West Virginia, which bordered a free state and aligned with the Union during the American Civil War.

Causes of the war (see also Origins of the American Civil War and Timeline of events leading to the American Civil War): All but Delaware share borders with states that joined the Confederacy. In Kentucky and Missouri there were both pro-Confederate and pro-Union government factions. Though every slave state (except South Carolina) contributed some troops to the Union side, the split was most severe in these border states, with men from the same family often fighting on opposite sides. West Virginia was formed in 1863 from the northwestern counties of Virginia, which had seceded from Virginia after Virginia seceded from the Union. In the cases of Kentucky and Missouri, the states had two state governments during the Civil War, one supporting the Confederacy and one supporting the Union.

In addition, two territories not yet states - the Indian Territory (now the state of Oklahoma) and the New Mexico Territory (now the states of Arizona and New Mexico) - also permitted slavery. Yet very few slaves could actually be found in these territories, despite the institution's legal status there. During the war, the major Indian tribes in Oklahoma signed an alliance with the Confederacy and participated in its military efforts. Residents of New Mexico Territory were of divided loyalties; the region was split between the Union and Confederacy at the 34th Parallel.
Oklahoma is often cited as a "border state" today, but Arizona and New Mexico are rarely, if ever, so characterized. With geographic, social, political, and economic connections to both the North and the South, the border states were critical to the outcome of the war and still delineate the cultural border that separates the North from the South. After Reconstruction, most of the border states adopted Jim Crow laws resembling those enacted in the South, but in recent decades some of them (most notably Delaware and Maryland) have become more Northern in their political, economic, and social orientation, while others (particularly Kentucky and West Virginia) have adopted a Southern way of life. (Telsur Southern Dialect Regional Map)

Lincoln's 1863 Emancipation Proclamation, designed as a war-measures act, applied only to territories not already under Union control, so it did not apply to the border states. Maryland, Missouri, and West Virginia each changed their state constitution to prohibit slavery. Slavery in Kentucky and Delaware (as well as remnants of slavery in West Virginia and New Jersey) was not ended until the 1865 ratification of the Thirteenth Amendment.

Both houses of Delaware's General Assembly rejected secession overwhelmingly, the House of Representatives unanimously.

The Maryland Legislature rejected secession in 1861, and Governor Hicks also opposed it. As a result of the Union Army's heavy presence in the state and the suspension of habeas corpus by Abraham Lincoln, several Maryland state legislators, as well as the mayor and police chief of Baltimore, who supported the secession, were arrested and imprisoned by Union authorities. (Notice that with Virginia having seceded, Union troops had to go through Maryland to reach the national capital at Washington DC.) Had Maryland also joined the Confederacy, Washington DC would have been totally surrounded. Maryland contributed troops to both the Union (60,000) and the Confederate (25,000) armies. Maryland was not covered by the 1863 Emancipation Proclamation.
Maryland adopted a new state constitution in 1864, which prohibited slavery and thus emancipated all slaves in the state.

Kentucky was strategic to Union victory in the Civil War. Lincoln once said, "I think to lose Kentucky is nearly the same as to lose the whole game. Kentucky gone, we cannot hold Missouri, nor Maryland. These all against us, and the job on our hands is too large for us. We would as well consent to separation at once, including the surrender of this capital" (Washington, which was surrounded by slave states: Confederate Virginia and Union-controlled Maryland). He is further reported to have said that he hoped to have God on his side, but he had to have Kentucky.

Kentucky did not secede, but a faction known as the Russellville Convention formed a Confederate government of Kentucky, which was recognized by the Confederate States of America as a member state. Kentucky was represented by the central star on the Confederate battle flag.

Kentucky Governor Beriah Magoffin proposed that slave states like Kentucky should conform to the US Constitution and remain in the Union. When Lincoln requested 75,000 men to serve in the Union, however, Magoffin, a Southern sympathizer, countered that Kentucky would "furnish no troops for the wicked purpose of subduing her sister Southern states." Kentucky tried to remain neutral, even issuing a proclamation on May 20, 1861, asking both sides to keep out. The neutrality was broken when Confederate General Leonidas Polk occupied Columbus, Kentucky, in the summer of 1861, though the Union had been openly enlisting troops in the state before this. In response, the Kentucky Legislature passed a resolution directing the governor to demand the evacuation of Confederate forces from Kentucky soil. Magoffin vetoed the proclamation, but the legislature overrode his veto. The legislature further decided to back General Ulysses S. Grant and his Union troops stationed in Paducah, Kentucky, on the grounds that the Confederacy had voided the original pledge by entering Kentucky first.
Southern sympathizers were outraged at the legislature's decisions, pointing out that Polk's troops in Kentucky were only en route to counter Grant's forces. Later legislative resolutions - such as inviting Union General Robert Anderson to enroll volunteers to expel the Confederate forces, requesting the governor to call out the militia, and appointing Union General Thomas L. Crittenden in command of Kentucky forces - only incensed the Southerners further. (Magoffin vetoed the resolutions, but all were overridden.) In 1862, the legislature passed an act to disfranchise citizens who enlisted in the Confederate States Army. Thus Kentucky's neutral status evolved into backing the Union. Most of those who originally sought neutrality turned to the Union cause.

When Confederate General Albert Sidney Johnston occupied Bowling Green, Kentucky, in the summer of 1861, the pro-Confederates in western and central Kentucky moved to establish a Confederate state government. The Russellville Convention met in Logan County on November 18, 1861. One hundred sixteen delegates from 68 counties elected to depose the current government and create a provisional government loyal to Kentucky's new unofficial Confederate Governor George W. Johnson. On December 10, 1861, Kentucky became the 13th state admitted to the Confederacy. Kentucky, along with Missouri, was a state with representatives in both Congresses and with regiments in both Union and Confederate armies.

Magoffin, still functioning as the official governor in Frankfort, would not recognize the Kentucky Confederates nor their attempts to establish a government in his state.
He continued to declare that Kentucky's official status in the war was that of a neutral state, even though the legislature backed the Union. Magoffin, fed up with the party divisions within the population and legislature, announced a special session of the legislature and then resigned his office in 1862.

Bowling Green remained occupied by the Confederates until February 1862, when General Grant moved from Missouri through Kentucky along the Tennessee line. Confederate Governor Johnson fled Bowling Green with the Confederate state records, headed south, and joined Confederate forces in Tennessee. After Johnson was killed fighting in the Battle of Shiloh, Richard Hawes was named Confederate governor. Shortly afterwards, the Provisional Confederate Congress was adjourned on February 17, 1862, on the eve of the inauguration of a permanent Congress. However, as Union occupation henceforth dominated the state, the Kentucky Confederate government, as of 1863, existed only on paper, and its representation in the permanent congress was minimal. It was dissolved when the Civil War ended in the spring of 1865.

After the secession of Southern states began, the newly elected governor of Missouri called upon the legislature to authorize a state constitutional convention on secession. A special election approved of the convention and delegates to it. This Missouri Constitutional Convention voted to remain within the Union, but rejected coercion of the Southern States by the United States. Pro-Southern Governor Claiborne F. Jackson was disappointed with the outcome. He called up the state militia to their districts for annual training. Jackson had designs on the St. Louis Arsenal and had been in secret correspondence with Confederate President Jefferson Davis to obtain artillery for the militia in St. Louis. Aware of these developments, Union Captain Nathaniel Lyon struck first, encircling the camp and forcing the state militia to surrender. While the prisoners were being marched to the arsenal, a deadly riot erupted (the Camp Jackson Affair).
These events caused greater Confederate support within the state. The already pro-Southern legislature passed the governor's military bill creating the Missouri State Guard. Governor Jackson appointed Sterling Price, who had been president of the convention, as major general of this reformed and expanded militia. Price and Union district commander Harney came to an agreement known as the Price-Harney Truce that calmed tensions in the state for several weeks. After Harney was removed and Lyon placed in charge, a meeting was held in St. Louis at the Planters' House between Lyon, his political ally Francis P. Blair, Jr., Price, and Jackson. The negotiations went nowhere, and after a few fruitless hours Lyon made his famous declaration, "this means war!" Price and Jackson rapidly departed for the capital.

Jackson, Price, and the state legislature were forced to flee the state capital of Jefferson City on June 14, 1861, in the face of Lyon's rapid advance against the state government. In the absence of the now exiled state government, the Missouri Constitutional Convention reconvened in late July. On July 30 the convention declared the state offices vacant and appointed a new provisional government with Hamilton Gamble as governor. President Lincoln's administration immediately recognized Gamble's government as the legal government, which provided both pro-Union militia forces for service within the state and volunteer regiments for the Union Army.

Fighting ensued between Union forces and a combined army of General Price's Missouri State Guard and Confederate troops from Arkansas and Texas under General Ben McCulloch. After winning victories at the Battle of Wilson's Creek and the siege of Lexington, Missouri, the secessionist forces had little choice but to retreat again to southwest Missouri as Union reinforcements arrived.
There, on October 30, 1861, in the town of Neosho, Jackson called the exiled state legislature into session, where they enacted a secession ordinance. It was recognized by the Confederate congress, and Missouri was admitted into the Confederacy on November 28. The exiled state government was forced to withdraw into Arkansas in the face of a largely reinforced Union Army.

Though regular Confederate troops staged several large-scale raids into Missouri, the fighting in the state for the next three years consisted mainly of guerrilla warfare. The guerrillas were primarily southern partisans, including William Quantrill, Frank and Jesse James, the Younger brothers, and William T. Anderson. Such small-unit tactics pioneered by the Missouri Partisan Rangers were seen in other occupied portions of the Confederacy during the Civil War. The James brothers' outlawry after the war has been seen as a continuation of guerrilla warfare. Governor Thomas C. Fletcher ended slavery in Missouri on January 11, 1865, by executive proclamation.

The serious divisions between the western and eastern sections of Virginia did not begin in the winter of 1860-1861. West Virginia historian C. H. Ambler wrote that "there are few years during the period from 1830 to 1850 which did not bring forth schemes for the dismemberment of the commonwealth." The western part of the state during this time was "the growing and aggressive section" while the east was "the declining and conservative one."
The west centered its grievances on the east's disproportionate (based on population) legislative representation and share of state revenues. The east justified this dominance because of its dependence on slaves, "the possession of which could be guaranteed and secured only by giving to masters a voice in the government adequate to the protection of their interests." In 1851 the Virginia Reform Convention, forced to recognize that the white population of the western part of the state outnumbered the east, made significant changes. Universal white suffrage was granted and the governor was to be determined by the direct vote of the people. The lower house of the legislature was apportioned strictly based on population, although the upper house still used a combination of population and property in determining its electoral districts.

By 1859 there were again strong sectional tensions at work within the state, although the west itself was split between the north and the south, with the south more satisfied with the changes made in 1851. Historian Daniel W. Crofts wrote, "Northwesterners complained that they had become 'the complete vassals of Eastern Virginia,' taxed 'unmercifully and increasingly, at her instance and for her benefit.'" Internal improvements important to the west, such as the James River and Kanawha Canal or railroads connecting the west to the east, had been promised but not built. Slaves, for tax purposes, were not valued above $300 despite a top field hand being worth five times that amount. The west had 135,000 more whites than the east, but the east controlled the state Senate. In the United States House of Representatives, because of the three-fifths rule, only five of Virginia's thirteen representatives came from western districts.

In the 1859 gubernatorial elections there was disenchantment with both parties in the west. Western grievances were ignored as "both parties engaged in a proslavery shouting match." Antislavery Whigs began to move towards the Republican Party; in the 1860 presidential election, Abraham Lincoln received 2,000 votes from the western panhandle. Crofts wrote that "no document better captures the mood of unconditional northwestern Virginia Unionists" than the following from a March 16, 1861 letter by Henry Dering of Morgantown to Waitman T. Willey: Talk about Northern oppression, talk about our rights being stolen from us by the North - it's all stuff, and dwindles into nothing when compared to our situation in Western Virginia.
The truth is the slavery oligarchy, are impudent boastful and tyrannical, it is the nature of the institution to make men so – and tho I am far, from being an abolitionist, yet if they persist, in their course, the day may come, when all Western Virginia will rise up, in her might and throw off the Shackles, which thro this very Divine institution, as they call it, has been pressing us down.

By December 1860 secession was being publicly debated throughout Virginia. Leading eastern newspapers such as the Richmond Enquirer, Richmond Examiner, and Norfolk Argus were openly calling for secession. The Wellsburg Herald on December 14 warned the east that the west would not be "legislated into treason or dragged into trouble to gratify the wishes of any set of men, or to subserve the interests of any section." The Morgantown Star on January 12 said that their region was "unwilling that slavery in Virginia shall be used to oppress the people of our section of the state. . . . We people in Western Virginia have borne the burden just about as long as we can stand it. We have been 'hewers of wood and drawers of water' for Eastern Virginia long enough."

In addition to traditional east-west differences, the specter of secession raised new issues for the northwest. This section shared a 450-mile (720 km) border with Ohio and Pennsylvania and, by virtue of the state's failure to build roads, was isolated from the rest of the state. A leading unionist said, "We would be swept by the enemy from the face of the earth before the news of the attack could reach our Eastern friends." Another unionist, addressing the section's close economic links with the North, asked, "Would you have us . . . act like madmen and cut our own throats merely to sustain you in a most unwarrantable rebellion?"

Despite unionist opposition, a special session of the state legislature in early January called for the election of delegates to a state convention on February 4 to consider secession. A proposal by Waitman T. Willey to have the convention also consider reforms to taxation and representation went nowhere. The convention first met on February 13 and voted for secession on April 17, 1861. The decision was dependent on ratification by a statewide referendum.
On April 22, 1861, John S. Carlisle led a meeting of 1,200 people in Harrison County. The meeting approved the "Clarksburg Resolutions", calling for the creation of a new state separate from Virginia. The resolutions were widely circulated and each county was asked to choose five "of their wisest, best, and distinguished men" as delegates. Historian Allan Nevins wrote, "The movement, spontaneous, full of extralegal irregularities, and varying from place to place, spread like the wind. Community after community held mass meetings."

Unionists in Virginia met at the Wheeling Convention from May 13 to May 15 to await the decision of the state referendum called to ratify the decision to secede. In attendance were over four hundred delegates from twenty-seven counties. Most delegations were chosen by public meetings rather than elections, and some attendees came strictly on their own. The editor of the Wheeling Western Star called it "almost a mass meeting of the people instead of a representative body." Carlisle, in front of a banner proclaiming "New Virginia, now or never", spoke for the immediate creation of a new state consisting of thirty-two counties. Speaking of the actions of the Virginia secession convention, he said, "Let us act; let us repudiate these monstrous usurpations; let us show our loyalty to Virginia and the Union at every hazard. It is useless to cry peace when there is no peace; and I for one will repeat what was said by one of Virginia's noblest sons and greatest statesmen, 'Give me liberty or give me death.'"

Speaking in opposition to action at this time, Willey argued that the convention had no authority to take such an action and referred to it as "triple treason". Francis H. Pierpont supported Willey and helped to work out a compromise that secured the withdrawal of the Carlisle motion, declared the state's Ordinance of Secession to be "unconstitutional, null, and void", and called for a second convention on June 11 if secession was ratified. Willey's closing remarks to the convention set the stage for the June meeting: Fellow citizens, the first thing we have got to fight is the Ordinance of Secession. Let us kill it on the 23rd of this month. Let us bury it deep within the hills of Northwestern Virginia. Let us pile up our glorious hills on it; bury it deep so that it will never make its appearance among us again. Let us go back home and vote, even if we are beaten upon the final result, for the benefit of the moral influence of that vote. If we give something like a decided . . . majority in the Northwest, that alone secures our rights. That alone at least secures an independent State if we desire it.

The statewide vote in favor of secession was 132,201 to 37,451. In the core Unionist enclave of northwestern Virginia the vote was 30,586 to 10,021 against secession, although the total vote in the counties that would become West Virginia was a closer 34,677 to 19,121 against.
The Second Wheeling Convention opened on June 11 with more than 100 delegates from 32 western counties representing nearly one-third of Virginia's total voting population. Members of the Virginia General Assembly were accepted as long as they were loyal to the Union, "and still others were seemingly self-appointed." The convention met "in open defiance of the Richmond authorities" and efforts were made in many counties to restrict attendance. Delegates were required to take a loyalty oath to the United States Constitution, "anything in the Ordinance of the Convention which assembled in Richmond, on 13 February last, to the contrary notwithstanding." Arthur I. Boreman, the future governor of West Virginia, was chosen as president, but the main leaders were Carlisle and Frank Pierpont.

While many still supported Carlisle's original plan to create a new state, Article IV Section 3 of the Constitution presented a problem. This section guaranteed that "no new State shall be formed or erected within the Jurisdiction of any other State . . . without the Consent of the Legislatures of the States Concerned as well as of Congress." The legal solution chosen by the convention is described by author W. Hunter Lesser: A new Virginia government would be created. All state offices would be declared vacant, the traitors thrown out by proxy and Union men appointed in their place. Loyal Unionists would claim the political framework of a state already recognized by the Federal government – thereby courting favor with a Lincoln administration not anxious to deal with the Rebels. Lincoln himself held the constitutional authority to determine which of two competing parties was the lawful state government. An 1849 Supreme Court case in Rhode Island – Luther vs. Borden – had set the precedent. This restored Virginia government would then, under this theory, have the authority to consent to the creation of a new state within the Old Dominion's old borders.

On June 13 Carlisle presented his "Declaration of Rights of the People of Virginia" to the convention. It accused the secessionists of "usurping" the rights of the people, creating an "illegal confederacy of rebellious states", and declared it was now their duty "to abolish" the state government as it existed. The convention approved this declaration on June 17 by a 56 to 0 vote. On June 14 "An Ordinance for the Re-organization of the State Government" was presented, which provided for the selection of a governor, lieutenant governor, and a five-member governor's council by the convention, declared all state government offices vacant, and recognized a "rump legislature" composed of loyal members of the General Assembly who had been elected in the May 23 statewide voting. This ordinance was approved on June 19.
Francis H. Pierpont was chosen as governor by the convention on June 20. Historian Virgil Lewis said this process was carried out in an “irregular . . . unjustifiable mode.” The next day Governor Pierpont notified President Lincoln of the convention’s decisions. Noting that there were “evil-minded persons” who were “making war on the loyal people of the state” and “pressing citizens against their consent into their military organization and seizing and appropriating their property to aid in the rebellion,” Pierpont requested aid “to suppress such rebellion and violence.” Secretary of War Cameron, replying for Lincoln, wrote: The President . . . never supposed that a brave and free people, though surprised and unarmed, could long be subjugated by a class of political adventurers always adverse to them, and the fact that they have already rallied, reorganized their government, and checked the march of these invaders demonstrates how justly he appreciated them. The Restored Government of Virginia granted permission for the formation of a new state on August 20, 1861. The Lt. Governor of the Restored Government, Daniel Polsley, strongly objected to the ordinance for the new state, saying in a speech on August 16: If they proceeded now to direct a division of the State before a free expression of the people could be had, they would do a more despotic act than any done by the Richmond Convention itself. . . . They now proposed a division when it was impossible for one-fourth of even the counties included in the boundaries proposed to give even an expression upon the proposition. The October 24, 1861 popular vote on the new state drew only 19,000 voters (compared to the 54,000 who had voted in the original secession referendum), one hundred of whom, according to two individual observers, were Ohio soldiers. The Second Wheeling Convention had proposed that only 39 counties be included in the new state. This number included 24 clearly Unionist counties and 15 pro-Confederate counties which the new state would find “imperative” because of their geographic relationship with the rest of the new state. These 39 counties contained a white population of 272,759, 78% of whom had a Unionist orientation. While there was overwhelming support at this convention for statehood, there was a “small, effective minority” that opposed this and they used “obstructionist tactics at every opportunity” in their efforts to defeat the majority.
It was this group opposed to statehood that was largely responsible for the inclusion of additional counties beyond this core. When the constitutional convention was held in Wheeling on November 16, 1861, the obstructionists attempted to have 71 counties included in the new state, a move which would have created a white Confederate-sympathizer majority of 316,308. Eventually a compromise was worked out to include 50 counties. Historian Richard O. Curry summed the results up this way: In conclusion, then, twenty-five of fifty counties encompassed by West Virginia supported the Confederacy and opposed dismemberment. The Rebel minority ran as high as 40 per cent in a few Union counties but the reverse was also true. Therefore, because northwestern Union counties contained 60 percent of the total population and the Confederate counties 40 per cent, a 60-40 ratio, the majority being Unionists, would appear to be a fair estimate of the division of sentiment among the inhabitants included in the state of West Virginia. Curry further concluded: On the other hand – and this is important too – the West Virginia government did not coerce the unwilling counties of the Valley and the southwest; it made little or no attempt to exercise effective control over these Confederate counties until after the war. Never at any time during the war did the Pierpont government or the administration of Arthur I. Boreman, first governor of West Virginia, control more than half the counties in the state. While the above political events were unfolding, in the late spring of 1861 Union troops from Ohio moved into western Virginia with the primary strategic goal of protecting the B & O Railroad. General George B. McClellan’s victories on June 3 at Philippi, July 11 at Rich Mountain, and September 10 at Carnifex Ferry “completely destroyed Confederate defenses in western Virginia.” However, after these victories most Federal troops were sent out of the new state to support McClellan elsewhere, leading Governor Boreman to write from Parkersburg, "The whole country South and East of us is abandoned to the Southern Confederacy." In central, southern and eastern West Virginia a guerrilla war ensued that lasted until 1865. Raids and recruitment by the Confederacy took place throughout the war. Estimates of Union and Confederate soldiers from West Virginia have varied widely, but some recent studies indicate that the numbers were about equal, from 22-25,000 each. Historian Richard Nelson Current places the number of West Virginians fighting for the Union at approximately 29,000. The new state constitution was passed by the Unionist counties in the spring of 1862 and this was approved by the restored Virginia government in May of 1862. The statehood bill for West Virginia was passed by Congress in December and signed by President Lincoln on December 31, 1862.
As a condition for statehood the US Congress required that the new state adopt a policy of gradual emancipation for its slaves, a provision called the Willey Amendment, which was added to the state constitution on March 26, 1863. Conventions at Mesilla, New Mexico, on March 18, 1861, and Tucson, Arizona, on March 23 adopted an ordinance of secession. The conventions established a pro-Southern government for the southern portions of the territory and called for the election of representatives to petition the Confederacy for admission and relief. Lewis Owings of Mesilla was elected the territory's first provisional governor, and Granville Henderson Oury of Tucson presented the territory's petition for admission into the Confederacy. In July 1861, Confederate forces from Texas, under Lieutenant Colonel John Baylor, entered Mesilla, described as "a strongly pro-Confederate community." The following day, Union Major Isaac Lynde approached Mesilla to engage Baylor's forces. Baylor's men, accompanied by militia out of Mesilla, attacked and defeated Lynde at the Battle of Mesilla on July 25. On August 1, Baylor proclaimed that the Confederate territory of Arizona would extend to the 34th parallel and named himself the new territorial governor. The territory was home to several subsequent engagements and skirmishes between the western armies of the Union and the Confederacy during the war. The Confederate loss at the Battle of Glorieta Pass, in March 1862, drove them back to Texas and ended involvement of New Mexico in the Civil War.
Though Tennessee had officially seceded, East Tennessee was pro-Union and had mostly voted against secession. Attempts to secede from Tennessee were suppressed by the Confederacy. Jefferson Davis arrested over 3,000 men suspected of being loyal to the Union and held them without trial. Tennessee came under control of Union forces in 1862 and was omitted from the Emancipation Proclamation. After the war, Tennessee was the first state to have its elected members readmitted to the US Congress. Winston County, Alabama, issued a resolution of secession from the state of Alabama. President Abraham Lincoln's Emancipation Proclamation was designed with the interests of border states in mind. The Proclamation did not free slaves within current Union-controlled territory because the presidential war power did not extend there. Lincoln maintained that under the Constitution, ending slavery in a state not in active rebellion against the Union could only be done legally by action of that state, or by amendment to the Constitution.
http://citizendia.org/Border_states_(Civil_War)
13
51
Science Fair Project Encyclopedia Coordinates (elementary mathematics) This article describes some of the common coordinate systems that appear in elementary mathematics. For advanced topics, please refer to coordinate system. For more background, see Cartesian coordinate system. The coordinates of a point are the components of a tuple of numbers used to represent the location of the point in the plane or space. A coordinate system is a plane or space where the origin and axes are defined so that coordinates can be measured. In the two-dimensional Cartesian coordinate system, a point P in the xy-plane is represented by a tuple of two components (x,y). - x is the signed distance from the y-axis to the point P, and - y is the signed distance from the x-axis to the point P. In the three-dimensional Cartesian coordinate system, a point P in the xyz-space is represented by a tuple of three components (x,y,z). - x is the signed distance from the yz-plane to the point P, - y is the signed distance from the xz-plane to the point P, and - z is the signed distance from the xy-plane to the point P. For advanced topics, please refer to Cartesian coordinate system. The term polar coordinates often refers to circular coordinates (two-dimensional). Other commonly used polar coordinates are cylindrical coordinates and spherical coordinates (both three-dimensional). The circular coordinate system, often referred to simply as the polar coordinate system, is a two-dimensional polar coordinate system, defined by an origin, O, and a semi-infinite line L leading from this point. L is also called the polar axis. In terms of the Cartesian coordinate system, one usually picks O to be the origin (0,0) and L to be the positive x-axis (the right half of the x-axis). In the circular coordinate system, a point P is represented by a tuple of two components (r,θ). Using terms of the Cartesian coordinate system, - r (radius) is the distance from the origin to the point P, and - θ (azimuth) is the angle between the positive x-axis and the line from the origin to the point P. The cylindrical coordinate system is a three-dimensional polar coordinate system. In the cylindrical coordinate system, a point P is represented by a tuple of three components (r,θ,h). Using terms of the Cartesian coordinate system, - r (radius) is the distance between the z-axis and the point P, - θ (azimuth or longitude) is the angle between the positive x-axis and the line from the origin to the point P projected onto the xy-plane, and - h (height) is the signed distance from the xy-plane to the point P. - Note: some sources use z for h; there is no "right" or "wrong" convention, but it is necessary to be aware of the convention being used. Cylindrical coordinates involve some redundancy; θ loses its significance if r = 0. Cylindrical coordinates are useful in analyzing systems that are symmetrical about an axis. For example, the infinitely long cylinder that has the Cartesian equation x² + y² = c² has the very simple equation r = c in cylindrical coordinates. The spherical coordinate system is a three-dimensional polar coordinate system. In the spherical coordinate system, a point P is represented by a tuple of three components (ρ,φ,θ).
Using terms of the Cartesian coordinate system, - ρ (radius) is the distance between the point P and the origin, - φ (colatitude or polar angle) is the angle between the z-axis and the line from the origin to the point P, and - θ (azimuth or longitude) is the angle between the positive x-axis and the line from the origin to the point P projected onto the xy-plane. NB: The above convention is the standard used by American mathematicians and American calculus textbooks. However, most physicists, engineers, and non-American mathematicians interchange the symbols φ and θ above, using φ to denote the azimuth and θ the colatitude. One should be very careful to note which convention is being used by a particular author. It should be noted that, regardless of how one labels the coordinates, one argument against the conventional American mathematical definition is the fact that it produces a left-handed coordinate system, rather than the usual convention of a right-handed coordinate system. The spherical coordinate system also involves some redundancy; φ loses its significance if ρ = 0, and θ loses its significance if ρ = 0 or φ = 0 or φ = π. To construct a point from its spherical coordinates: from the origin, go ρ along the positive z-axis, rotate φ about the y-axis toward the direction of the positive x-axis, and rotate θ about the z-axis toward the direction of the positive y-axis. Spherical coordinates are useful in analyzing systems that are symmetrical about a point; a sphere that has the Cartesian equation x² + y² + z² = c² has the very simple equation ρ = c in spherical coordinates. Spherical coordinates are the natural coordinates for physical situations where there is spherical symmetry. In such a situation, one can describe waves using spherical harmonics. Another application is ergonomic design, where ρ is the arm length of a stationary person and the angles describe the direction of the arm as it reaches out. The concept of spherical coordinates can be extended to higher-dimensional spaces; the coordinates are then referred to as hyperspherical coordinates. See also: Celestial coordinate system. Conversion between coordinate systems. Cartesian and circular: x = r cos θ and y = r sin θ; in the other direction, r = √(x² + y²) and θ is the angle whose tangent is y/x, placed in the correct quadrant. The quadrant correction can be written in closed form using the Heaviside step function u0 (with u0(0) = 0) and the signum function sgn. Here the u0 and sgn functions are being used as "logical" switches which are used as shorthand substitutes for several if ... then statements. Some computer languages include a bivariate arctangent function atan2(y,x) which finds the value for θ in the correct quadrant given x and y. Cartesian and cylindrical: x = r cos θ, y = r sin θ, z = h; conversely, r = √(x² + y²), θ = atan2(y, x), h = z. Cartesian and spherical: x = ρ sin φ cos θ, y = ρ sin φ sin θ, z = ρ cos φ; conversely, ρ = √(x² + y² + z²), φ = arccos(z/ρ), θ = atan2(y, x). Cylindrical and spherical: r = ρ sin φ, θ = θ, h = ρ cos φ; conversely, ρ = √(r² + h²), φ = atan2(r, h), θ = θ. - Coordinate system - Cartesian coordinate system - Parabolic coordinate system - Curvilinear coordinates - Coordinate rotation - Vector (spatial) - Vector fields in cylindrical and spherical coordinates - Nabla in cylindrical and spherical coordinates - Frank Wattenberg has made some nice animations illustrating spherical and cylindrical coordinate systems. - For spherical coordinates, http://www.physics.oregonstate.edu/bridge/papers/spherical.pdf is a description of the different conventions in use for naming components of spherical coordinates, along with a proposal for standardizing this.
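These conversions are easy to carry out with the atan2 function mentioned above. The following is a minimal Python sketch (the function names are illustrative, not from any particular library) that follows the American-mathematics convention used in this article, with φ as the colatitude and θ as the azimuth.

```python
import math

def cartesian_to_circular(x, y):
    """2-D Cartesian (x, y) -> circular/polar (r, theta), theta in radians."""
    r = math.hypot(x, y)          # r = sqrt(x^2 + y^2)
    theta = math.atan2(y, x)      # atan2 places theta in the correct quadrant
    return r, theta

def circular_to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_spherical(x, y, z):
    """3-D Cartesian -> (rho, phi, theta) with phi = colatitude, theta = azimuth."""
    rho = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(z / rho) if rho > 0 else 0.0   # colatitude is undefined at the origin
    theta = math.atan2(y, x)
    return rho, phi, theta

def spherical_to_cartesian(rho, phi, theta):
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

if __name__ == "__main__":
    print(cartesian_to_circular(1.0, 1.0))        # (~1.414, ~0.785 rad)
    print(cartesian_to_spherical(0.0, 0.0, 2.0))  # (2.0, 0.0, 0.0)
```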
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Spherical_coordinates
13
57
The continuity equation deals with changes in the area of cross-sections of passages which fluids flow through. Laminar flow is flow of fluids that doesn't depend on time, ideal fluid flow. The formula for the continuity equation is density 1 x area 1 x velocity 1 = density 2 x area 2 x velocity 2. Let's talk about the continuity equation. The continuity equation, rather than being associated with what happens when fluids are at rest, is actually associated with what happens when fluids are moving. We have fluid flow, and now we're going to be interested in something called laminar flow. Laminar flow is the standard type of nice ideal fluid flow that everybody likes to deal with, as opposed to turbulent flow that's a little crazy and we need computers to handle. So laminar flow is a fluid flow that doesn't depend on time in a certain sense. So what you can do is you can think about a stream and you've got a boat and you take that boat and you put it in a certain place in the stream and you watch what happens. Now an hour later you take an identical boat, you put it in the same place in the stream, and if it does exactly the same thing then that means that the flow in the stream was laminar flow. Alright, so let's imagine that we've got the laminar flow of a fluid through a pipe that changes its cross sectional area. So we've got like a bottle neck, right? So here it comes in with large cross sectional area and now it goes into this small cross sectional area space, and I'd like to know what is the relationship between these 2 speeds? Well, the continuity equation tells us that the product of area and speed has to be constant. Alright, so this form of the continuity equation is only valid when the fluid is incompressible, which means it has constant density, and in this case large area equals small speed. Now that's basically because if Av equals a constant, and I double the area, well then I've got a factor of 2 there. But I've got to absorb that factor somewhere because the area times the speed was constant. So if I've got double the area, that factor becomes twice as big, so I can only have half as much speed. Right, so large area, small speed. Alright, let's look at why this is the case. So let's consider, in a time delta t, what's going on in this weird pipe. Alright, so in time delta t the fluid over on the left in the big cross sectional area piece is going to go a distance v1 times delta t, and it's got this cross sectional area A1. In that same time period, the fluid over here in the small cross sectional area piece is going to go a distance v2 times delta t. Now the idea is that the total mass of fluid inside of this pipe didn't change; after delta t goes by I had this piece pushing over into this piece. This whole piece in between didn't change, it's the same as it was before. So that means that the mass associated with this must be the same as the mass associated with this. Okay, well mass is density times volume, so it's density times the volume, which is A1 v1 delta t; density times the volume over here in the small part is A2 v2 delta t. We'll cancel out the delta t's and that gives us the general form of the continuity equation: density times area times speed is constant. If the density itself is constant, which is to say the fluid is incompressible, then the density will cancel and we'll have the continuity equation. Alright, now let's think of some specific examples in which the continuity equation can be brought out and we can actually see what's going on.
One of the best examples I know of has to do with water coming out of a faucet. Alright, as the water comes out of the faucet it starts off at the top basically at rest, maybe moving a little bit; it's not moving real fast, you know, I mean I'm not turning it way on. And then as it falls it speeds up. So that means that my speed gets bigger, but the area times the speed is supposed to be constant. So if the speed gets bigger the area is going to have to get smaller. And so that means that you'll see the column of water narrow as it comes down, and I don't know if you've ever really watched water coming out of a faucet very carefully, but you will see that effect. And that's directly from this continuity equation. Now another situation: notice what we said about this bottle neck over here. We said large area equals small speed, small area equals large speed. Anybody who's driven in traffic knows that that's not the case in traffic. You get a bottle neck, the area goes down, the speed goes way down. So what's going on? Does the continuity equation not make any sense? The issue is that when you've got a bottle neck in traffic the density goes way up, because the cars get much closer together; those cars have got to go in and actually merge and be in that small cross sectional area. So if we look over at the more general form we see that the density goes up, way up. The area goes down some, but it actually doesn't go down enough to compensate for the increase in density, and that means that the speed has got to go down too. So that's the situation with the traffic problem, and those are some examples of the continuity equation.
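The faucet example can also be checked with a few lines of code. The sketch below is only an illustration with made-up numbers: it lets the falling stream speed up under gravity and applies the incompressible continuity equation A1·v1 = A2·v2 to see how much the cross-sectional area of the stream must shrink.

```python
import math

def speed_after_fall(v0, h, g=9.81):
    """Speed of a freely falling stream after dropping a height h (metres)."""
    return math.sqrt(v0**2 + 2 * g * h)

def area_from_continuity(a0, v0, v):
    """Incompressible continuity equation: A0*v0 = A*v  ->  A = A0*v0 / v."""
    return a0 * v0 / v

# Illustrative numbers: water leaves the faucet at 0.5 m/s through a 1 cm^2 opening.
a0, v0 = 1.0e-4, 0.5                      # m^2, m/s
for h in (0.0, 0.05, 0.10, 0.20):          # fall distances in metres
    v = speed_after_fall(v0, h)
    a = area_from_continuity(a0, v0, v)
    print(f"h = {h:4.2f} m: v = {v:4.2f} m/s, area = {a * 1e4:4.2f} cm^2")
```

Running it shows the stream area dropping from 1 cm² at the spout to roughly a third of that after a 10 cm fall, which is exactly the narrowing described above.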
http://www.brightstorm.com/science/physics/solids-liquids-and-gases/continuity-equation/
13
60
In physics, damping is an effect that reduces the amplitude of oscillations in an oscillatory system, particularly the harmonic oscillator. This effect is linearly related to the velocity of the oscillations. This restriction leads to a linear differential equation of motion, and a simple analytic solution. In mechanics, damping may be realized using a dashpot. This device uses the viscous drag of a fluid, such as oil, to provide a resistance that is related linearly to velocity. The damping force Fc is expressed as Fc = -c·v, where c is the viscous damping coefficient, given in units of newton seconds per meter (N s/m) or simply kilograms per second. In engineering applications it is often desirable to linearize non-linear drag forces. This may be done by finding an equivalent work coefficient in the case of harmonic forcing. In non-harmonic cases, restrictions on the speed may lead to accurate linearization. Generally, damped harmonic oscillators satisfy the second-order differential equation d²x/dt² + 2ζω0·dx/dt + ω0²·x = 0. The value of the damping ratio ζ determines the behavior of the system. A damped harmonic oscillator can be: - Overdamped (ζ > 1): The system returns (exponentially decays) to equilibrium without oscillating. Larger values of the damping ratio ζ return to equilibrium more slowly. - Critically damped (ζ = 1): The system returns to equilibrium as quickly as possible without oscillating. This is often desired for the damping of systems such as doors. - Underdamped (0 < ζ < 1): The system oscillates (at reduced frequency compared to the undamped case) with the amplitude gradually decreasing to zero. - Undamped (ζ = 0): The system oscillates at its natural resonant frequency (ω0). In physics and engineering, damping may be mathematically modelled as a force synchronous with the velocity of the object but opposite in direction to it. If such a force is also proportional to the velocity, as for a simple mechanical viscous damper (dashpot), the force may be related to the velocity by F = -c·v, where c is the damping coefficient, given in units of newton-seconds per meter. This force may be used as an approximation to the friction caused by drag. While drag friction varies with the square of the velocity, if the velocity is restricted to a small range this non-linear effect may be small. In such a situation, a linearized friction coefficient may be determined which produces little error compared with the second order solution. Example: mass–spring–damper. An ideal mass–spring–damper system with mass m, spring constant k, and viscous damping coefficient c is subject to a spring force Fs = -k·x and a damping force Fd = -c·dx/dt. Since Ftot = Fs + Fd = m·d²x/dt², the equation of motion is m·d²x/dt² + c·dx/dt + k·x = 0. This differential equation may be rearranged into d²x/dt² + (c/m)·dx/dt + (k/m)·x = 0. The following parameters are then defined: ω0 = √(k/m) and ζ = c/(2√(m·k)). The first parameter, ω0, is called the (undamped) natural frequency of the system. The second parameter, ζ, is called the damping ratio. The natural frequency represents an angular frequency, expressed in radians per second. The damping ratio is a dimensionless quantity. The differential equation now becomes d²x/dt² + 2ζω0·dx/dt + ω0²·x = 0. Continuing, we can solve the equation by assuming a solution x such that x = e^(γt). Substituting this assumed solution back into the differential equation gives γ² + 2ζω0·γ + ω0² = 0, which is the characteristic equation. Solving the characteristic equation will give two roots, γ+ and γ−, with γ± = ω0·(−ζ ± √(ζ² − 1)). The solution to the differential equation is thus x(t) = A·e^(γ+ t) + B·e^(γ− t), where A and B are determined by the initial conditions of the system. System behavior The behavior of the system depends on the relative values of the two fundamental parameters, the natural frequency ω0 and the damping ratio ζ.
In particular, the qualitative behavior of the system depends crucially on whether the quadratic equation for γ has one real solution, two real solutions, or two complex conjugate solutions. Critical damping (ζ = 1) When ζ = 1, there is a double root γ (defined above), which is real. The system is said to be critically damped. A critically damped system converges to zero as fast as possible without oscillating. An example of critical damping is the door closer seen on many hinged doors in public buildings. The recoil mechanisms in most guns are also critically damped so that they return to their original position, after the recoil due to firing, in the least possible time. In this case, with only one root γ, there is in addition to the solution x(t) = e^(γt) a solution x(t) = t·e^(γt); the general solution is x(t) = (A + B·t)·e^(γt), where A and B are determined by the initial conditions of the system (usually the initial position and velocity of the mass). Over-damping (ζ > 1) When ζ > 1, the system is over-damped and there are two different real roots. An over-damped door-closer will take longer to close than a critically damped door would. The solution to the motion equation is x(t) = A·e^(γ+ t) + B·e^(γ− t), where A and B are determined by the initial conditions of the system. Under-damping (0 ≤ ζ < 1) Finally, when 0 < ζ < 1, γ is complex, and the system is under-damped. In this situation, the system will oscillate at the natural damped frequency ωd, which is a function of the natural frequency and the damping ratio. To continue the analogy, an underdamped door closer would close quickly, but would hit the door frame with significant velocity, or would oscillate in the case of a swinging door. In this case, the solution can be generally written as x(t) = e^(−ζω0 t)·(A·cos(ωd·t) + B·sin(ωd·t)), where ωd = ω0·√(1 − ζ²) represents the damped frequency or ringing frequency of the system, and A and B are again determined by the initial conditions of the system. This "damped frequency" is not to be confused with the damped resonant frequency or peak frequency ωpeak. This is the frequency at which a moderately underdamped (ζ < 1/√2) simple 2nd-order harmonic oscillator has its maximum gain (or peak transmissibility) when driven by a sinusoidal input. The frequency at which this peak occurs is given by ωpeak = ω0·√(1 − 2ζ²). For an under-damped system, the value of ζ can be found by examining the logarithm of the ratio of succeeding amplitudes of a system. This is called the logarithmic decrement. Alternative models Viscous damping models, although widely used, are not the only damping models. A wide range of models can be found in specialized literature. One is the so-called "hysteretic damping model" or "structural damping model". When a metal beam is vibrating, the internal damping can be better described by a force proportional to the displacement but in phase with the velocity. In such a case, the differential equation that describes the free movement of a single-degree-of-freedom system becomes m·d²x/dt² + h·i·x + k·x = 0, where h is the hysteretic damping coefficient and i denotes the imaginary unit; the presence of i is required to synchronize the damping force to the velocity (i·x being in phase with the velocity). This equation is more often written as m·d²x/dt² + k·(1 + i·η)·x = 0, where η is the hysteretic damping ratio, that is, the fraction of energy lost in each cycle of the vibration. Although requiring complex analysis to solve the equation, this model reproduces the real behaviour of many vibrating structures more closely than the viscous model.
A more general model that also requires complex analysis, the fractional model, not only includes both the viscous and hysteretic models, but also allows for intermediate cases (useful for some polymers). Here the damping term is taken proportional to a fractional derivative of the displacement, m·d²x/dt² + A·d^r x/dt^r + k·x = 0, where r is any number, usually between 0 (for hysteretic) and 1 (for viscous), and A is a general damping coefficient (h for hysteretic and c for viscous).
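Returning to the viscous model, the classification by damping ratio described above is straightforward to compute. The following is a minimal Python sketch with placeholder values of m, c, and k (not taken from any particular system); it reports the natural frequency, the damping ratio, the regime, and, where they exist, the ringing frequency ωd and the peak frequency ωpeak.

```python
import math

def characterize(m, c, k):
    """Classify a mass-spring-damper from its mass, damping coefficient and stiffness."""
    w0 = math.sqrt(k / m)                # undamped natural frequency (rad/s)
    zeta = c / (2 * math.sqrt(m * k))    # damping ratio (dimensionless)
    if zeta == 0:
        regime = "undamped"
    elif zeta < 1:
        regime = "under-damped"
    elif zeta == 1:
        regime = "critically damped"
    else:
        regime = "over-damped"
    wd = w0 * math.sqrt(1 - zeta**2) if zeta < 1 else None              # ringing frequency
    wpeak = w0 * math.sqrt(1 - 2 * zeta**2) if zeta < 1 / math.sqrt(2) else None
    return w0, zeta, regime, wd, wpeak

# Placeholder values: 2 kg mass, 4 N s/m damper, 200 N/m spring.
w0, zeta, regime, wd, wpeak = characterize(2.0, 4.0, 200.0)
print(f"w0 = {w0:.2f} rad/s, zeta = {zeta:.2f} ({regime})")
if wd is not None:
    print(f"damped (ringing) frequency wd = {wd:.2f} rad/s")
if wpeak is not None:
    print(f"peak (resonant) frequency wpeak = {wpeak:.2f} rad/s")
```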
http://en.wikipedia.org/wiki/Damping
13
55
posted July 7, 2005 Squeezing and Heating Rock to Scope Out How Metallic Iron Dribbled to the Center of the Earth--- Experiments showing how cobalt and nickel concentrate in molten metal shed light on the formation of Earth's metallic core. Written by G. Jeffrey Taylor Cosmochemists have tackled this problem by doing experiments at high pressure and temperature to map out how cobalt and nickel partitioning between metal and silicate differs compared to low pressure. However, the studies differ in their predictions of the behavior because of differences in the assumed pressure, temperature, and oxidation state during core formation. Nancy Chabot (Case Western Reserve University, now at the Johns Hopkins Applied Physics Laboratory), and David Draper and Carl Agee from the University of New Mexico addressed the discrepancies by designing a series of experiments over a wide range in temperature. Their results plot out the conditions under which metal can sink to the core while leading to the observed cobalt and nickel concentrations in the mantle. While the results do not lead to a unique solution, they point the way for further studies of other elements that tend to concentrate in metallic iron, and they show clearly that the equal nickel and cobalt concentrations in the mantle can be the product of core formation in the early Earth. Journey to the Center of the Earth A giant steel ball sits at the center of the Earth. The ball reaches about half way to the surface, thereby occupying an eighth of the Earth's volume. The metallic ball, called the core, is surrounded by a rocky shell called the mantle. Because iron is much denser than rock, the core makes up about 30% of the total mass of the Earth. We know all that mostly from knowing the density of the planet, which indicates that there must be something denser than rock inside, and from its moment of inertia, which says that the mass is not distributed uniformly--the deep interior is denser than shallower parts. The idea that the dense sphere could be made of iron was inspired by finding iron meteorites and figuring that they could be the cores of asteroids. Studies of earthquakes determine the size of the core quite precisely and show that there is an inner solid metallic core. One of the most amazing scientific feats has been to determine what the Earth is made of inside. Geophysical data establish that a large sphere of metallic iron occupies the deep interior of the Earth. Its center has solidified, but is still hot. It is surrounded by a swirling mass of molten metallic iron and other metals. Motions in that liquid outer core produce Earth's magnetic field. The core is surrounded by a rocky layer called the mantle. The mantle is hot, but solid almost everywhere except in places near the top where low pressure permits it to melt partially. The topmost layer is the crust, which we live on. An intriguing problem in planetary science is figuring out how the core formed. Draining Metal to Middle Earth How did the metallic iron get to the middle of the Earth? The answer would seem obvious: iron is dense, hence heavy, so it should sink. But it has to sink through rock, which even when hot is strong enough to support fairly hefty masses of iron, and the iron has to migrate to form large, sinkable pods. The simplest way to separate iron is to melt the Earth, or at least a large portion of it.
The metallic iron would also melt and fall as droplets to form a core. This implies a hot origin for the Earth. A hot infant Earth was out of vogue for decades. Many geophysicists thought that it formed cold from cold dust and then heated up slowly as decay of radioactive elements like potassium, uranium, and thorium dumped heat into the mantle. The core was thought to have formed up to a billion years after formation of the planet. Two things changed that view. One was the discovery that when the Moon formed it was surrounded by a huge magma system, hundreds of kilometers deep (see PSRD article: Moonbeams and Elements). The other is that measurements of short-lived isotopes such as tungsten-182 show that the metallic cores of the Earth, Moon, and Mars must have formed within 50 million years of the formation of the oldest materials in the solar system (see PSRD article: Hafnium, Tungsten, and the Differentiation of the Moon and Mars). Since core formation requires a high temperature, the isotope data show that it happened very early in the planet's history. Existence of the lunar magma ocean inspired cosmochemists to devise models depicting a largely molten Earth, including a significant magma ocean (see graphic below). The models use the tendency of some elements to concentrate in metallic iron rather than in silicate magma. Elements that concentrate in iron are called siderophile, which means "iron loving." Siderophile elements have different degrees of affection for iron, which provides cosmochemists with a way to decipher the conditions of core formation. In this model of Earth's core formation, metallic iron sinks in the partially molten Earth that is surrounded by a magma ocean. The chief uncertainty is in the thickness of the magma ocean, which affects both its pressure and temperature. Other factors also add to the uncertainty in calculating the behavior of elements during core formation, such as its oxidation state and composition of the molten silicate and concentration of other elements in metallic iron. Elements in this graph are plotted along the bottom in order of increasing tendency to concentrate in metallic iron. The concentrations represent the concentration in the mantle divided by the concentration in carbonaceous chondrites, which are thought to approximate the relative element abundances of the initial material from which the planets formed. Plotting close to the CI line (equal to 1) indicates little difference compared to carbonaceous chondrites. Most elements plot below the line, indicating that they are depleted in the mantle. Co and Ni are depleted to the same extent, but calculations (red dots on the graph) using their behavior at low pressure predict that Ni ought to be depleted 100 times as much as Co. This has led cosmochemists to investigate the chemical behavior of Co and Ni at high pressures. The depletion of Co and Ni to the same extent is at odds with predictions from experiments at low pressure (one atmosphere, the same as the surface of the Earth). Element behavior is characterized by use of partition coefficients (D), the concentration of an element in metallic iron divided by its concentration in co-existing molten silicate. The low-pressure partition coefficients for Co and Ni have been measured previously. The surprise is that the partition coefficient for Ni is 100 times higher than for Co. This implies that Ni should be depleted 100 times as much as Co. This discrepancy led cosmochemists to think up explanations for the nickel excess. 
Some of these ideas are listed below (from a very useful list in Alex Halliday's chapter about the Earth in volume 1 of the Treatise on Geochemistry):
- Equilibration between metal and silicate at high pressure--elements behave differently at high pressure, so perhaps Co and Ni partitioning will be different.
- Equilibration between iron metal enriched in sulfur--partitioning also depends on the compositions of the metal and silicate, and on the temperature. If the metal contained more sulfur than the metal used in experiments, it might accept less Ni. Also, it would be liquid at a lower temperature (adding sulfur depresses the melting temperature), and temperature is another important factor that affects element partitioning.
- Inefficient core formation--leaving behind some of the metallic iron-nickel in the mantle would raise the levels of all the siderophile elements.
- Heterogeneous accretion of the Earth, leading to the addition of what has been called the "late veneer." The term is misleading, as the idea did not depict addition of a thin coating of metal-bearing silicates being added to the growing Earth. It is really late addition to the upper mantle of material rich in siderophile elements.
- Addition of material to the Earth during the moon-forming event involving impact of a giant projectile--this is the widely accepted (though not proven) idea that a Mars-sized impactor hit the Earth during its growth, forming the Moon from debris flung into orbit (see PSRD article: Origin of the Earth and Moon).
- Equilibration at extremely high temperature--as noted, element behavior changes as temperature changes.
- High-temperature equilibration in a magma ocean at the boundary between what was to become the lower and upper mantle.
It is the last idea that has received the most attention recently and is the focus of the measurements made by Nancy Chabot and her colleagues. Cosmochemists have made such measurements before, but not at the full range of conditions possible in a magma ocean surrounding the infant Earth. For example, pressures predicted in a magma ocean have ranged from 24 to 59 GPa and temperatures have ranged from 2200 to more than 4000 K (4500 to 7700 °F). Also, results of the predictions did not agree with each other (see graphs below), leading to differences for the depth and temperature calculated for the hypothetical magma ocean. Previously predicted behaviors of the partition coefficients (D) for Ni and Co as a function of temperature are drastically different because of differences in the predicted effects of temperature during core formation. The vertical axis is the ratio of the partition coefficient at a given temperature to the partition coefficient at 1900 K. (Partition coefficient in this case means the concentration of Co or Ni in metallic iron divided by the concentration in molten silicate (rock).) The comparison to the D at 1900 K helps show the variation of D with temperature. Temperature is plotted as 1/T (times 1000) because many temperature-dependent parameters vary linearly when plotted versus 1/T rather than T directly. It is important to obtain a better quantitative handle on core formation. Here's one reason why: Kevin Righter (Johnson Space Center) and Michael Drake (University of Arizona) used measurements of Ni and Co partitioning between metal and silicate to calculate the pressure and temperature conditions in a terrestrial magma ocean that would give the correct depletion factors for Co and Ni.
However, the temperature they needed to make the concentrations work out right was lower than the melting temperature of mantle rock at high pressure. This led them to conclude that the magma ocean contained water, which would lower the melting temperature. The presence of water has enormous implications for how and when the Earth received the water that ended up in the oceans--but are the temperature estimates correct? Nancy Chabot realized that more experiments were needed, especially to understand the effect of temperature on partition coefficients. Experiments Plumb the Depths of the Magma Ocean Inside the Earth or in a magma ocean surrounding the Earth as it was forming, the temperature is high and pressures are crushing. To simulate those conditions, cosmochemists use special high-pressure equipment. Chabot and her colleagues used a large device to squeeze a multi-anvil to high pressure (see photograph below). The device was located at the Johnson Space Center and then moved to the University of New Mexico, where co-authors Dave Draper and Carl Agee now work. The samples are placed into a small octahedral sample holder made of a ceramic material. The samples are surrounded by aluminum oxide capsules and sleeves and by rhenium metal, which can be heated to high temperatures. The entire sample is placed inside a huge press and the pressure increased to the desired level. For this set of experiments, Chabot used 7 GPa (70,000 times atmospheric pressure). The samples were a mixture of basalt and metal powders. Basalt is a type of lava and is different from the composition of the terrestrial magma ocean, but it allowed the investigators to run experiments at a wide range of temperatures. Such a wide range is not possible with rocks that represent the composition of the Earth's mantle because of their high melting temperature. The important thing in these experiments is the presence of molten silicate and molten metal. The metallic powders consisted of iron with 4 wt% Co and 10 wt% Ni mixed in, and variable amounts of carbon. The amount of carbon varied from none to about 6%. It was added to test the effect of carbon on the Co and Ni partition coefficients. It also lowers the melting temperature of the metal, raising the range of temperatures where the metallic phase remains liquid. The products of the experiments were blobs of metallic Fe-Ni-C embedded in fine-grained silicates (see figures below). The metal assembled into spherical blobs that when cooled rapidly at the end of each experiment crystallized into branching crystals of metallic Fe-Ni-Co surrounded by spots of carbon-rich metal. The silicate consisted of long crystals of garnet surrounded by tiny crystals of other minerals (not identified) and glass. Ordinarily, a basalt like the one used would crystallize pyroxene and plagioclase feldspar, but at the high pressure and with the aluminum oxide capsules in these experiments, garnet formed instead. Both metal and silicate were analyzed using an electron microprobe. In the experimental products, blobs of metallic iron containing nickel and carbon formed in the molten silicate. These back scattered electron images were taken with an electron microprobe. When cooled rapidly at the end of an experiment, the metal formed crystals of carbon-free Fe-Ni (white in the top right photograph) surrounded by dark areas of carbon-rich metal. Silicate cooling produces garnet (dark in bottom right photograph) surrounded by other silicate minerals, oxides, and glass. 
The metal and silicates were analyzed with an electron microprobe using a defocused beam 20 to 50 micrometers in diameter. Electron microprobes focus a beam of electrons on a sample. The electrons produce X-rays from elements in the sample. The X-rays are characteristic of each element and the number of X-rays (counted with a special detector) is proportional to the amount present. Cobalt and Nickel Behavior Quantified Five factors might affect the way Co and Ni partition into metal and silicates: (1) the oxidation state of the metal and silicate system, (2) the composition of the silicate, (3) pressure, (4) the composition of the metallic liquid, and (5) temperature. Previous experiments have shown how the first three factors affect Co and Ni partitioning; the experiments by Chabot and her colleagues address the effects of metallic composition and temperature. Each factor is discussed briefly below. Oxidation state of magma is expressed as the oxygen fugacity, which is a measure of the amount of oxygen available for reaction. Although oxygen makes up about half of almost every rock, the amount that is not already bound to other elements is the oxygen fugacity. It is as if there is a tenuous oxygen atmosphere present. The oxygen fugacity is usually expressed as the variation with respect to some mineral assemblage. For example, if quartz (SiO2), fayalite (Fe2SiO4), and magnetite (Fe3O4) are all present, they will keep the oxygen fugacity constant. They buffer it. Chabot estimated the oxygen fugacity compared to the iron-wustite buffer (Fe metal and FeO) by measuring the concentrations of iron in metal and iron oxide in silicates. Previous results have shown that values of the logarithms of the Co and Ni partition coefficients are proportional to -0.5 times ΔIW, where ΔIW is the oxygen fugacity relative to the fugacity at the iron-wustite buffer. The lower the oxygen fugacity, the lower the ΔIW value. Because of the logarithmic dependence, a value of -1 is a factor of ten below the iron-wustite buffer; a value of -3 is 1000 times lower. Silicate composition does not make much difference in the case of Co and Ni. Both are divalent (doubly charged ions) in rocky melts over a wide range of oxygen fugacity. Pressure makes a big difference. Six previous experimental studies determined that with increasing pressure, Co and Ni both partition less strongly into the metallic melt, but the effect of pressure is more pronounced for Ni than for Co. The new experiments by Chabot and colleagues show that the composition of the metallic liquid does not make any significant difference in the way Co and Ni partition between metal and silicate. The experiments were done with a range of carbon concentrations and Chabot found no significant differences in partitioning as a function of carbon content. That factor can be safely ignored. Sulfur can have an effect on the partitioning of Co and Ni, but at sulfur concentrations relevant to core formation in the Earth (<10 wt%), that factor can also be safely disregarded. The variation of Co and Ni partitioning with temperature is where the experiments done by Chabot and co-authors greatly expand our understanding of the terrestrial magma ocean. The great variety of predicted behaviors of Co and Ni described above were due in part to too small a range of temperatures used in previous experiments. The Chabot experiments fill in the big gaps. The experiments show that Co and Ni behavior changes with temperature. 
The partition coefficients for each decrease with increasing temperature, but the change with temperature is greater for Ni (see graph below). Partition coefficients for Ni and Co vary with temperature. Partition coefficients are plotted on a logarithmic scale (using the natural logarithm) and temperature is plotted as the inverse temperature times 1000. Using the Measurements Once the experiments were complete, Chabot and her teammates wanted to use the new and previous data to understand conditions in a terrestrial magma ocean. The hard part of doing that is that the partition coefficients of Co and Ni vary with temperature, pressure, and oxygen fugacity, all at once. They solved this problem by using all available data to devise an equation that captures the variation with temperature, pressure, and oxygen fugacity. Others have created such equations previously and used them to model core formation, but they did not have the benefit of knowing the full range of variation with temperature. The Chabot equation does not include the effects of silicate or metal compositions, which Chabot shows are not significant. (Other elements in the metal besides carbon and sulfur, such as silicon, hydrogen, and oxygen, might affect the partitioning of Co and Ni. More experiments are needed to test how significant those effects would be.) The parameterized equation allowed Chabot and associates to compare their results to those obtained by others. In the graphs below (shown earlier but without the new data) partition coefficients for Ni and Co are shown relative to their value at 1900 K (thus they all plot at 1 there) for a pressure of 7 GPa and 1.5 log units below the IW buffer. The new data plot inside the range given by previous reports, but show that the prediction of increasing Co partition coefficient with increasing temperature is not consistent with the new results. The new data show that Co and Ni partition coefficients both decrease with increasing temperature, and fall between most previous estimates. The parameterized equation also allowed Chabot to calculate what conditions could lead to an equal amount of partitioning of Co and Ni. The downside is that there is not one unique solution. Instead, plotting temperature against pressure leads to a large field of acceptable solutions. The acceptable solution set includes all of those suggested by previous research (see graph below). More importantly, the conditions show that Co and Ni abundances in the Earth's mantle can be matched by core formation at high pressure and temperature and low oxygen fugacity, an idea previously rejected by Righter and Drake on the basis of available experimental data. Righter and Drake instead suggested that lower temperatures were required, which drove them to infer that water was present in the mantle. (Water lowers the melting temperature of magma.) Chabot's new results do not disprove the wet magma ocean idea, but show that other possibilities are in the running. Lightly shaded area represents all solutions that result in Co and Ni being depleted to the same extent in the Earth's mantle. The darker blue wedges show solutions at the indicated oxidation conditions; they are consistent with results from other studies. The calculations show that a low temperature, water-bearing magma ocean is not required to produce the observed Co and Ni concentrations in the mantle, but do not rule it out. An Unfinished Job Like most questions in cosmochemistry, this one is far from answered.
There are many complications that the experiments do not take into account. A major one is that the metal and silicate may equilibrate over a range of pressures as metal dribbled to the core, not just one pressure. Future calculations need to take that into account. In addition, there are other siderophile elements whose behavior under a range of conditions needs to be determined. Those elements, especially those cosmochemists call moderately siderophile, might help us narrow down the range of possible conditions in the terrestrial magma ocean. Studies like these also relate to measurements of the time for core formation using the concentrations of tungsten isotopes, models of the formation of the Earth and Moon, and investigation of cases where the silicate is only slightly molten. It is a fascinating, interdisciplinary problem whose solution will lead to an improved understanding of a major event in the history of our planet.
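As a small computational illustration of the plotting convention used throughout this discussion (the natural logarithm of D against 1000/T), the sketch below fits a straight line to hypothetical (temperature, D) pairs. The numbers are invented placeholders, not the experimental values, and the fit ignores the pressure and oxygen-fugacity terms of the full parameterization; it is only meant to show the bookkeeping.

```python
import numpy as np

# Hypothetical (temperature in K, partition coefficient D) pairs -- placeholders only,
# not actual measurements.
T = np.array([1900.0, 2100.0, 2300.0, 2500.0])
D = np.array([60.0, 45.0, 36.0, 30.0])

# Fit ln(D) as a straight line in 1000/T, mirroring the plotting convention above.
x = 1000.0 / T
slope, intercept = np.polyfit(x, np.log(D), 1)

def D_of_T(temperature_k):
    """Partition coefficient predicted by the (illustrative) linear fit."""
    return np.exp(intercept + slope * (1000.0 / temperature_k))

print(f"ln D = {intercept:.2f} + {slope:.2f} * (1000/T)")
print(f"D(2200 K) relative to D(1900 K): {D_of_T(2200.0) / D_of_T(1900.0):.2f}")
```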
http://www.psrd.hawaii.edu/July05/cobalt_and_nickel.html
13
58
The velocity of an object is simply its speed in a particular direction. Note that both speed and direction are required to define a velocity. The velocity (v) is a physical quantity describing the motion. A change in an object's velocity can therefore arise from either a change in its speed or in its direction. For example, an aeroplane that is circling at a constant speed of 200 km/h is changing its velocity because it is continuously changing its direction. An aeroplane that is taking off may go from zero to 200 km/h in a straight line and so would also be changing its velocity. A change in velocity is called an acceleration. Objects are only accelerated if a force is applied to them. (The amount of acceleration depends on the size of the force and the mass of the object being shifted; see Newton's Second Law of Motion.) In the case of the circling aeroplane, the pilot banks to use the force of lift from the wings to change direction. In another example, the Space Shuttle orbits the earth at a constant speed but is constantly changing its velocity because of the circular orbit. In this case the force causing the acceleration is provided by the earth's gravity acting on the shuttle. The average speed v of an object moving a distance d during a time interval t is described by the formula v = d/t. Acceleration is the rate of change of an object's velocity over time. The average acceleration of an object whose speed changes from vi to vf during a time interval t is given by a = (vf - vi)/t, where vi is the object's initial velocity and vf is the object's final velocity over the period of time t. Velocity (symbol: v) is a vector measurement of the rate and direction of motion. The scalar absolute value (magnitude) of velocity is speed. Velocity can also be defined as rate of change of displacement or just as the rate of displacement, depending on how the term displacement is used. It is thus a vector quantity with dimension length/time. In the SI (metric) system it is measured in metres per second (m/s). The instantaneous velocity vector v of an object whose position at time t is x(t) can be computed as the derivative v = dx/dt. The instantaneous acceleration vector a of an object whose position at time t is x(t) is a = dv/dt = d²x/dt². The equation for an object's velocity can be obtained mathematically by taking the integral of the equation for its acceleration from some initial time to some later point in time. The final velocity vf of an object which starts with velocity vi and then accelerates at constant acceleration a for a period of time t is vf = vi + a·t. The average velocity of an object undergoing constant acceleration is (vi + vf)/2. To find the displacement d of such an accelerating object during a time interval t, substitute this expression into the first formula to get d = ((vi + vf)/2)·t. When only the object's initial velocity is known, the expression d = vi·t + (1/2)·a·t² can be used. These basic equations for final velocity and displacement can be combined to form an equation that is independent of time, also known as Torricelli's Equation: vf² = vi² + 2·a·d. The above equations are valid for both classical mechanics and special relativity. Where classical mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in classical mechanics, all observers agree on the value of t and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. The kinetic energy (energy of motion) of an object of mass m moving with speed v is Ek = (1/2)·m·v²; it depends on speed but not on direction, and is thus a scalar quantity.
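The constant-acceleration formulas above can be verified numerically. A short Python sketch with arbitrary example values, computing the displacement two ways and checking the result against Torricelli's equation:

```python
def final_velocity(vi, a, t):
    """vf = vi + a*t"""
    return vi + a * t

def displacement(vi, a, t):
    """d = vi*t + (1/2)*a*t^2"""
    return vi * t + 0.5 * a * t**2

def torricelli_vf(vi, a, d):
    """vf^2 = vi^2 + 2*a*d  (time-independent form)"""
    return (vi**2 + 2 * a * d) ** 0.5

# Example: start at 5 m/s, accelerate at 2 m/s^2 for 4 s.
vi, a, t = 5.0, 2.0, 4.0
vf = final_velocity(vi, a, t)               # 13 m/s
d = displacement(vi, a, t)                  # 36 m
assert abs(d - 0.5 * (vi + vf) * t) < 1e-9  # average-velocity form gives the same d
assert abs(torricelli_vf(vi, a, d) - vf) < 1e-9
print(vf, d)
```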
In polar coordinates, a two-dimensional velocity can be decomposed into a radial velocity, defined as the component of velocity away from or toward the origin, and a transverse velocity, the component of velocity along a circle centred at the origin, equal to the distance from the origin times the angular velocity. Angular momentum in scalar form is the distance to the origin times the transverse speed, or equivalently, the distance squared times the angular speed, with a plus or minus sign to distinguish clockwise from anti-clockwise motion. If forces act in the radial direction only, as in the case of a gravitational orbit, angular momentum is constant; the transverse speed is then inversely proportional to the distance, the angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. The last of these relations is Kepler's second law of planetary motion.
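A minimal Python sketch of the decomposition and the conservation relations just described; the positions, velocities, and the factor-of-two change in distance are made-up numbers used purely for illustration.

```python
import math

# Radial/transverse decomposition of a plane velocity about the origin,
# plus the conservation relations described above.
def decompose(x, y, vx, vy):
    """Return (distance, radial component, transverse component) of (vx, vy)."""
    r = math.hypot(x, y)
    radial = (x * vx + y * vy) / r        # component toward/away from the origin
    transverse = (x * vy - y * vx) / r    # component along the circle about the origin
    return r, radial, transverse

print(decompose(3.0, 4.0, 1.0, 2.0))      # (5.0, 2.2, 0.4)

# A body under a purely radial force: halve the distance and check the relations.
r1, vt1 = 10.0, 3.0                       # distance and transverse speed
L = r1 * vt1                              # angular momentum (per unit mass) is conserved
r2 = r1 / 2
vt2 = L / r2                              # transverse speed ~ 1/r   -> doubles to 6.0
w1, w2 = vt1 / r1, vt2 / r2               # angular speed   ~ 1/r^2  -> quadruples
area1, area2 = 0.5 * r1 * vt1, 0.5 * r2 * vt2   # rate of sweeping out area -> unchanged
print(vt2, w2 / w1, area1 == area2)       # 6.0 4.0 True
```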
http://engineering.wikia.com/wiki/Velocity
13
88
Addressing, Routing, and Multiplexing

To deliver data between two Internet hosts, it is necessary to move data across the network to the correct host, and within that host to the correct user or process. TCP/IP uses three schemes to accomplish these tasks: Addressing: IP addresses deliver data to the correct host. Routing: Gateways deliver data to the correct network. Multiplexing: Protocol and port numbers deliver data to the correct software module within the host. Each of these functions is necessary to send data between two co-operating applications across the Internet.

IP Host Address: The Internetwork Protocol identifies hosts with a 32-bit number called an IP address or host address. To avoid confusion with MAC addresses, which are machine or station addresses, the term IP address will be used to designate this kind of address. IP addresses are written as four dot-separated decimal numbers, each between 0 and 255. IP addresses must be unique among all connected machines, that is, among any hosts that you can reach over a network or connected set of networks, including your local area network, remote offices joined by the company's wide-area network, or even the entire Internet community. The Internet Protocol moves data between hosts in the form of datagrams. Each datagram is delivered to the address contained in the Destination Address field of the datagram's header. The Destination Address is a standard 32-bit IP address that contains sufficient information to uniquely identify a network and a specific host on that network. If your network is connected to the Internet, you have to get a range of IP addresses assigned to your machines through a central network administration authority.

The IP address uniqueness requirement differs from that for MAC addresses. IP addresses are unique only on connected networks, but machine MAC addresses are unique in the world, independent of any connectivity. Part of the reason for the difference in the uniqueness requirement is that IP addresses are 32 bits, while MAC addresses are 48 bits, so mapping every possible MAC address into an IP address requires some overlap. Of course, not every machine on an Ethernet is running IP protocols, so the many-to-one mapping isn't as bad as the numbers might indicate. There are a variety of reasons why the IP address is only 32 bits, while the MAC address is 48 bits, most of which are historical. Since the network and data link layers use different addressing schemes, some system is needed to convert or map the IP addresses to the MAC addresses. Transport-layer services and user processes use IP addresses to identify hosts, but packets that go out on the network need MAC addresses. The Address Resolution Protocol (ARP) is used to convert the 32-bit IP address of a host into its 48-bit MAC address. When a host wants to map an IP address to a MAC address, it broadcasts an ARP request on the network, asking for the host using the IP address to respond. The host that sees its own IP address in the request returns its MAC address to the sender. With a MAC address, the sending host can transmit a packet on the Ethernet and know that the receiving host will recognise it.

IP Address Classes: An IP address contains a network part and a host part, but the format of these parts is not the same in every IP address. Figure 87 shows the IP address classes. Not all network addresses or host addresses are available for use. Two class A network addresses, 0 and 127, are reserved for special use.
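As a rough illustration of the classful scheme just described, here is a minimal Python sketch that reads the class from the high-order bits of the first octet and splits an address into its network and host parts; the sample address and helper names are invented for illustration, and real address assignment has more special cases than shown here.

```python
# Classful IP addressing sketch: the class is read from the first octet,
# and the class determines how many leading octets form the network part.
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"      # leading bit 0, network = first octet
    elif first < 192:
        return "B"      # leading bits 10, network = first two octets
    elif first < 224:
        return "C"      # leading bits 110, network = first three octets
    elif first < 240:
        return "D"      # multicast
    return "E"          # reserved

def split_network_host(ip: str):
    """Split a dotted-quad address into (network part, host part)."""
    octets = ip.split(".")
    n = {"A": 1, "B": 2, "C": 3}.get(address_class(ip))
    if n is None:
        raise ValueError("class D/E addresses have no network/host split")
    return ".".join(octets[:n]), ".".join(octets[n:])

print(address_class("26.104.0.19"), split_network_host("26.104.0.19"))
# -> A ('26', '104.0.19')
```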
Network 0 designates the default route (which is used to simplify the routing information that IP must handle), and network 127 is the loopback address (which simplifies network applications by allowing the local host to be addressed in the same manner as a remote host). These special network addresses are used when configuring a host. There are also some host addresses reserved for special use. In all network classes, host numbers 0 and 255 are reserved. An IP address with all host bits set to zero identifies the network itself. Addresses in this form are used in routing table listings to refer to entire networks. An IP address with all host bits set to one is a broadcast address, which is used to simultaneously address every host on a network. A datagram sent to this address is delivered to every individual host on that network.

IP uses the network portion of the address to route the datagram between networks. The full address, including the host information, is used to make final delivery when the datagram reaches the destination network. Figure 88 shows host communication on a local network.

The standard structure of an IP address can be locally modified by using host address bits as additional network address bits. Essentially, the dividing line between network address bits and host bits is moved, creating additional networks but reducing the maximum number of hosts that can belong to each network. These newly designated network bits define a network within the larger network, called a subnet. Subnetting allows decentralised management of host addressing. With the standard addressing scheme, a single administrator is responsible for managing host addresses for the entire network. By subnetting, the administrator can delegate address assignment to smaller organisations within the overall organisation. Subnetting can also be used to overcome hardware differences and distance limitations. IP routers can link dissimilar physical networks together, but only if each physical network has its own unique network address. Subnetting divides a single network address into many unique subnet addresses, so that each physical network can have its own unique address. Figure 89 shows IP addresses with and without subnetting. A subnet is defined by applying a bit mask, the subnet mask, to the IP address. If a bit is on in the mask, the corresponding bit in the address is interpreted as a network bit. If the bit in the mask is off, the bit belongs to the host part of the address. The subnet is only known locally; to the rest of the Internet, the address is still interpreted as a standard IP address. Figure 90 shows host communication with subnetting.

As networks grow in size, so does the traffic imposed on the wire, which in turn impacts the overall network performance, including response times. To alleviate such degradation, network specialists resort to breaking the network into multiple networks that are interconnected by specialised devices, including routers, bridges, and switches. The routing approach calls on the implementation of various co-operative processes, in both routers and workstations, whose main concern is to allow for the intelligent delivery of data to its ultimate destination. Data exchange can take place between any two workstations, whether or not both belong to the same network. Figure 91 shows a view of routing. Figure 91 emphasises that the underlying physical networks that a datagram travels through may be different and even incompatible.
Host A1 on the Token Ring network routes the datagram through gateway G1 to reach host B1 on the Ethernet. Gateway G1 forwards the data through the X.25 network to gateway G2, for delivery to B1. The datagram traverses three physically different networks, but eventually arrives intact at B1. A good place to start when discussing routers is with a thorough discussion of addresses, including MAC addresses, network addresses, and complete addresses.

The Routing Table: To perform its function reliably, the routing process is equipped with the capability to maintain a road map depicting the entire internetwork of which it is part. This road map is commonly referred to as the routing table, and it includes routing information describing where every known network is and how it can be reached. The routing process builds and maintains the routing table by employing a route discovery process known as the Routing Information Protocol (RIP). Routers should be capable of selecting the shortest path connecting two networks. Routers discover the road map of the internetwork by dynamically exchanging routing information among themselves, by being statically configured by network installers, or both. The dynamic exchange of routing information is handled by yet another process besides the routing process itself. In the case of TCP/IP, IP handles the routing process, whereas RIP handles the route discovery process.

Internet Routing Architecture: When a hierarchical structure is used, routing information about all of the networks in the internet is passed into the core gateways (a central delivery medium to carry long-distance traffic). The core gateways process this information and then exchange it among themselves using the Gateway-to-Gateway Protocol (GGP). The processed routing information is then passed back out to the external gateways. Figure 92 shows the Internet routing architecture. Outside of the Internet core are groups of independent networks called Autonomous Systems (AS); an autonomous system is a collection of networks and gateways with its own internal mechanism for collecting routing information and passing it to other network systems.

The Routing Table: Gateways route data between networks, but all network devices, hosts as well as gateways, must make routing decisions. For most hosts, the routing decisions are simple: If the destination is on the local network, the data is delivered to the destination host. If the destination is on a remote network, the data is forwarded to a local gateway. Because routing is network oriented, IP makes routing decisions based on the network portion of the address. The IP module determines the network part of the destination's IP address by checking the high-order bits of the address to determine the address class. The address class determines the portion of the address that IP uses to identify the network. If the destination network is the local network, the local subnet mask is applied to the destination address. After determining the destination network, the IP module looks up the network in the local routing table. Packets are routed toward their destination as directed by the routing table. The routing table may be built by the system administrator or by routing protocols, but the end result is the same: IP routing decisions are simple table look-ups. Figure 93 shows a flowchart depiction of the IP routing algorithm. You can display the routing table's contents with the netstat -r command.
The netstat command displays a routing table containing the following fields: Destination: The destination network or host. Gateway: The gateway to use to reach the specified destination. Flags: The flags describe certain characteristics of this route. U: Indicates that the route is up and operational. H: Indicates this is a route to a specific host. G: Means the route uses a gateway. D: Means that this route was added because of an ICMP redirect. Refcnt: Shows the number of times the route has been referenced to establish a connection. Use: Shows the number of packets transmitted via this route. Interface: The name of the network interface used by this route.

All of the gateways that appear in a routing table are on networks directly connected to the local system. A routing table does not contain end-to-end routes. A route only points to the next gateway, called the next hop, along the path to the destination network. The host relies on the local gateway to deliver the data, and the gateway relies on other gateways. As a datagram moves from one gateway to another, it should eventually reach one that is directly connected to its destination network. It is this last gateway that finally delivers the data to the destination host.

The IP address and the routing table direct a datagram to a specific physical network, but when the data travels across a network, it must obey the physical layer protocol used by that network. The physical networks that underlie the TCP/IP network do not understand IP addressing. Physical networks have their own addressing schemes, and there are as many different addressing schemes as there are different types of physical networks. One task of the network access protocols is to map IP addresses to physical network addresses. Figure 94 shows the operation of ARP. The most common example of this network access layer function is the translation of IP addresses to Ethernet addresses. The protocol that performs this function is the Address Resolution Protocol (ARP). Figure 95 shows the layout of an ARP request or ARP reply. In figure 95, when an ARP request is sent, all fields in the layout are used except the Recipient Hardware Address (which the request is trying to identify). In an ARP reply, all the fields are used. The fields in the ARP request and reply can have several values.

The ARP software maintains a table of translations between IP addresses and Ethernet addresses. This table is built dynamically. When ARP receives a request to translate an IP address, it checks for the address in its table. If the address is found, it returns the Ethernet address from its table. If the address is not found in the table, ARP broadcasts a packet to every host on the Ethernet. The packet contains the IP address for which an Ethernet address is sought. If a receiving host identifies the IP address as its own, it responds by sending its Ethernet address back to the requesting host. The response is then cached in the ARP table. The arp -a command displays all the contents of the ARP table. Figure 96 shows routing domains.

The Reverse Address Resolution Protocol (RARP) is a variant of the Address Resolution Protocol. RARP also translates addresses, but in the opposite direction: it converts Ethernet addresses to IP addresses. The RARP protocol really has nothing to do with routing data from one system to another. RARP helps configure diskless systems by allowing diskless workstations to learn their IP address.
The diskless workstation uses the Ethernet broadcast facility to ask which IP address maps to its Ethernet address. When a server on the network sees the request, it looks up the Ethernet address in its table. If it finds a match, the server replies with the workstation's IP address. Figure 97 shows the interrelationship between IP and Ethernet MAC addresses as reflected in the Ethernet data frame. In figure 97, the shaded fields correspond to the destination and source addresses of host A (the sender) and host B (the receiver).

Protocols, Ports, and Sockets: Once data is routed through the network and delivered to a specific host, it must be delivered to the correct user or process. As the data moves up or down the layers of TCP/IP, a mechanism is needed to deliver data to the correct protocols in each layer. The system must be able to combine data from many applications into a few transport protocols, and from the transport protocols into the Internet Protocol. Combining many sources of data into a single data stream is called multiplexing. Data arriving from the network must be demultiplexed, divided for delivery to multiple processes. To accomplish this, IP uses protocol numbers to identify transport protocols, and the transport protocols use port numbers to identify applications. Figure 98 shows protocol and port numbers. Figure 99 shows the protocol interdependency between application-level protocols and transport-level protocols.

The protocol number is a single byte in the header of the datagram. Its value identifies the protocol in the layer above IP to which the data should be passed. A host may have many TCP and UDP connections at any time. Connections to a host are distinguished by a port number, which serves as a sort of mailbox number for incoming datagrams. There may be many processes using TCP and UDP on a single machine, and the port numbers distinguish these processes for incoming packets. When a user program opens a TCP or UDP socket, it gets connected to a port on the local host. The application may specify the port, usually when trying to reach some service with a well-defined port number, or it may allow the operating system to fill in the port number with the next available free port number.

After IP passes incoming data to the transport protocol, the transport protocol passes the data to the correct application process. Application processes are identified by port numbers, which are 16-bit values. The source port number, which identifies the process that sent the data, and the destination port number, which identifies the process that is to receive the data, are contained in the header of each TCP segment and UDP packet. Port numbers are not unique between transport-layer protocols; the numbers are only unique within a specific transport protocol. It is the combination of protocol and port numbers that uniquely identifies the specific process the data should be delivered to. Figure 100 shows data packets multiplexed via TCP or UDP through port addresses and onto the targeted TCP/IP applications. In figure 100, if a data packet arrives specifying a transport protocol of 6, it is forwarded to the TCP implementation. If the packet specifies 17 as the required protocol, the IP layer forwards the packet to the program implementing UDP. Figure 101 shows the exchange of port numbers during the TCP handshake. In figure 101, the source host randomly generates a source port, in this example 3044. It sends out a segment with a source port of 3044 and a destination port of 23.
The destination host receives the segment and responds back using 23 as its source port and 3044 as its destination port. Well-known ports are standardised port numbers that enable remote computers to know which port to connect to for a particular network service. This simplifies the connection process because both the sender and the receiver know in advance that data bound for a specific process will use a specific port. There is a second type of port number called a dynamically allocated port. As the name implies, these ports are not pre-assigned; they are assigned to processes when needed. The system ensures that it does not assign the same port number to two processes, and that the numbers assigned are above the range of standard port numbers. Dynamically allocated ports provide the flexibility needed to support multiple users.

The combination of an IP address and a port number is called a socket. A socket uniquely identifies a single network process within the entire Internet. One pair of sockets, one for the receiving host and one for the sending host, defines the connection for connection-oriented protocols such as TCP.

Names and Addresses: Every network interface attached to a TCP/IP network is defined by a unique 32-bit IP address. A name, called a host name, can be assigned to any device that has an IP address. Names are assigned to devices because, compared to numeric Internet addresses, names are easier to remember and type correctly. The network software doesn't require names, but they do make it easier for humans to use the network. In most cases, host names and numeric addresses can be used interchangeably. Whether a command is entered with an address or a host name, the network connection always takes place based on the IP address. The system converts the host name to an address before the network connection is made. The network administrator is responsible for assigning names and addresses and storing them in the database used for the conversion. There are two methods for translating names into addresses. The older method simply looks up the host name in a table called the host table. The newer technique uses a distributed database system called Domain Name Service (DNS) to translate names to addresses.

The Host Table: The host table is a simple text file that associates IP addresses with host names. Most systems have a small host table containing name and address information about the important hosts on the local network. This small table is used when DNS is not running, such as during the initial system start-up. Even if you use DNS, you should create a small host file containing entries for your host, for localhost, and for the gateway and servers on your local net. Sites that use NIS use the host table as input to the NIS host database. You can use NIS in conjunction with DNS, but even when they are used together, most NIS sites create host tables that have an entry for every host on the local network. Hosts connected to the Internet should use DNS.

The Network Information Centre (NIC) Host Table: The NIC maintains a large table of Internet hosts, which is stored in the hosts.txt file. The NIC places host names and addresses into the file for all sites on the Internet. The NIC table contains three types of entries: network records, gateway records, and host records. Figure 102 shows the format of the hosts.txt records. In figure 102, each record begins with a keyword (NET, HOST or GATEWAY) that identifies the record type, followed by an IP address and one or more names associated with the address.
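As a toy illustration of extracting HOST records into /etc/hosts-style lines, here is a minimal Python sketch; the exact field layout and the sample entries are invented for illustration, following only the general description given for figure 102 (a keyword, an address, and one or more names).

```python
# Hypothetical host-table records: keyword, IP address, comma-separated names.
SAMPLE = """\
NET 26.0.0.0 EXAMPLENET
GATEWAY 26.0.0.73 EXAMPLE-GW
HOST 26.2.0.74 ALPHA.EXAMPLE.ORG,ALPHA
HOST 26.1.0.65 BETA.EXAMPLE.ORG,BETA
"""

def hosts_file_lines(table: str):
    """Turn HOST records into /etc/hosts-style 'address  name name ...' lines."""
    for record in table.splitlines():
        fields = record.split()
        if len(fields) >= 3 and fields[0] == "HOST":
            address, names = fields[1], fields[2].split(",")
            yield address + "\t" + " ".join(names)

for line in hosts_file_lines(SAMPLE):
    print(line)
# 26.2.0.74  ALPHA.EXAMPLE.ORG ALPHA
# 26.1.0.65  BETA.EXAMPLE.ORG BETA
```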
The IP addresses and host names from the HOST records are extracted to construct the /etc/hosts file. The network addresses and names from the NET records are used to create the /etc/networks file.

Domain Name Service (DNS): DNS is a distributed database system that doesn't bog down as the database grows. It guarantees that new host information is disseminated to the rest of the network as it is needed, to those who are interested. If a DNS server receives a request for information about a host for which it has no information, it passes on the request to an authoritative server (any server responsible for maintaining accurate information about the domain being queried). When the authoritative server answers, the local server saves (caches) the answer for future use. The next time the local server receives a request for this information, it answers the request itself. The ability to control host information from an authoritative source and to automatically disseminate accurate information makes DNS superior to the host table, even for small networks not connected to the Internet. Figure 103 shows the resolution of a DNS query.

The Domain Hierarchy: DNS is a distributed hierarchical system for resolving host names into IP addresses. Under DNS, there is no central database with all of the Internet host information. The information is distributed among thousands of name servers organised into a hierarchy. DNS has a root domain at the top of the domain hierarchy that is served by a group of name servers called the root servers. Information about a domain is found by tracing pointers from the root domain, through subordinate domains, to the target domain. Directly under the root domain are the top-level domains. There are two basic types of top-level domains, geographic and organisational. Figure 104 shows the domain hierarchy.

Creating Domains and Subdomains: The Network Information Centre has the authority to allocate domains. To obtain a domain, you apply to the NIC for authority to create a domain under one of the top-level domains. Once the authority to create a domain is granted, you can create additional domains, called subdomains, under your domain. Domain names reflect the domain hierarchy. They are written from most specific, a host name, to least specific, a top-level domain, with each part of the domain name separated by a dot (<host name>.<subdomain>.<domain>). Figure 105 shows the organisation of the DNS name space.

Network Information Service (NIS): NIS is an administrative database system that provides central control and automatic dissemination of important administrative files. NIS can be used in conjunction with DNS, or as an alternative to it. NIS and DNS have some similarities and some differences. Like DNS, NIS overcomes the problem of accurately distributing the host table, but unlike DNS, it only provides service for local area networks. NIS is not intended as a service for the Internet as a whole. Another difference is that NIS provides access to a wider range of information than DNS. As its name implies, NIS provides much more than name-to-address conversion. It converts several standard UNIX files into databases that can be queried over the network. These databases are called NIS maps. NIS provides a distributed database system for common configuration files. NIS servers manage copies of the database files, and NIS clients request information from the servers instead of using their own local copies of these files.
Once NIS is running, simply updating the NIS server ensures that all machines will be able to retrieve the new configuration file information.

A major problem in running a distributed computing environment is maintaining separate copies of common configuration files such as the password, group, and hosts files. Ideally, the network should be consistent in its configuration, so that users don't have to worry about where they have accounts or if they'll be able to find a new machine on the network. Preserving consistency, however, means that every change to one of these common files must be propagated to every host on the network. The Network Information Service (NIS) addresses these problems. It is a distributed database system that replaces copies of commonly replicated configuration files with a centralised management facility. Instead of having to manage each host's files, you maintain one database for each file on one central server. Machines that are using NIS retrieve information as needed from these databases. If you add a new system to the network, you can modify one file on a central server and propagate this change to the rest of the network, rather than changing the hosts file for each individual host on the network. Because NIS enforces consistent views of files on the network, it is suited for files that have no host-specific information in them. Files that are generally the same on all hosts in a network fit the NIS model of a distributed database nicely. NIS provides all hosts information from its global database.

Master, Slaves, and Clients: NIS is built on the client-server model. An NIS server is a host that contains NIS data files, called maps. Clients are hosts that request information from these maps. Servers are further divided into master and slave servers: The master server is the true single owner of the map data. Slave NIS servers handle client requests, but they do not modify the NIS maps. The master server is responsible for all map maintenance and distribution to its slave servers. Once an NIS map is built on the master to include a change, the new map file is distributed to all slave servers. NIS clients see these changes when they perform queries on the map file; it doesn't matter whether the clients are talking to a master or a slave server, because once the map data is distributed, all NIS servers have the same information. Figure 106 shows NIS masters, slaves, and clients.

With the distinction between NIS servers and clients firmly established, we can see that each system fits into the NIS scheme in one of three ways: Client only: This is typical of desktop workstations, where the system administrator tries to minimise the amount of host-specific tailoring required to bring a system onto the network. As an NIS client, the host gets all of its common configuration information from an extant server. Server only: While the host services client requests for map information, it does not use NIS for its own operation. A server-only configuration may be useful when a server must provide global host and password information for the NIS clients, but security concerns prohibit the server from using these same files. However, bypassing the central configuration scheme opens some of the same loopholes that NIS was intended to close. Although it is possible to configure a system to be an NIS server only, we don't recommend it.
Client and server: In most cases, an NIS server also functions as an NIS client so that its management is streamlined with that of other client-only hosts.

More precisely, a domain is a set of NIS maps. A client can refer to a map from any of several different domains. Most of the time, however, any given host will only look up data from one set of NIS maps. Therefore, it's common to use the term domain to mean the group of systems that share a set of NIS maps. All systems that need to share common configuration information are put into an NIS domain. Although each system can potentially look up information in any NIS domain, each system is assigned to a default domain, meaning that the system, by default, looks up information from a particular set of NIS maps. It is up to the administrator to decide how many different domains are needed.

An interruption in NIS service affects all NIS clients if no other servers are available. Even if another server is available, clients will suffer periodic slowdowns as they recognise that the current server is down and hunt for a new one. A second imperative for NIS servers is synchronisation. Clients may get their NIS information from any server, so all servers must have copies of every map file to ensure proper NIS operation. Furthermore, the data in each map file on the slave servers must agree with that on the master server, so that NIS clients cannot get out-of-date or stale data. NIS contains several mechanisms for making changes to map files and distributing these changes to all NIS servers on a regular basis.

Remote Procedure Call (RPC): RPC provides a mechanism for one host to make a procedure call that appears to be part of the local process but is really executed on another machine on the network. Typically, the host on which the procedure call is executed has resources that are not available on the calling host. This distribution of computing services imposes a client/server relationship on the two hosts: the host owning the resource is a server for that resource, and the calling host becomes a client of the server when it needs access to the resource. The resource might be a centralised configuration file (NIS) or a shared filesystem (NFS).

Instead of executing the procedure on the local host, the RPC system bundles up the arguments passed to the procedure into a network datagram. The exact bundling method is determined by the presentation layer, described in the next section. The RPC client creates a session by locating the appropriate server and sending the datagram to a process on the server that can execute the RPC. On the server, the arguments are unpacked, the server executes the procedure, packages the result (if any), and sends it back to the client. Back on the client side, the reply is converted into a return value for the procedure call, and the user application is re-entered as if a local procedure call had completed. RPC services may be built on either TCP or UDP transports, although most are UDP-oriented because they are centred on short-lived requests. Using UDP also forces the RPC call to contain enough context information for its execution independent of any other RPC request, since UDP packets may arrive in any order, if at all.

When an RPC call is made, the client may specify a time-out period in which the call must complete. If the server is overloaded or has crashed, or if the request is lost in transit to the server, the remote call may not be executed before the time-out period expires.
The action taken when an RPC call times out varies by application: some resend the RPC call, while others may look for another server. Remote Procedure Call Execution: Figure 107 shows remote procedure call execution.

External Data Representation (XDR): XDR is built on the notion of an immutable network byte ordering, called the canonical form. It isn't really important what the canonical form is; your systems may or may not use the same byte ordering and structure-packing conventions. This form simply allows network hosts to exchange structured data independently of any peculiarities of a particular machine. All data structures are converted into the network byte ordering and padded appropriately. The rule of XDR is "sender makes local canonical, receiver makes canonical local." Any data that goes over the network is in canonical form. A host sending data on the network converts it to canonical form, and the host that receives the data converts it back into its local representation. A different way to implement the presentation layer might be "receiver makes local." In this case, the sender does nothing to the local data, and the receiver must deduce the packing and encoding technique and convert it into the local equivalent. While this scheme may send less data over the network, it places the burden of incorporating a new hardware architecture on the receiving side, rather than on the new machine.
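To make the canonical-form rule concrete, here is a minimal Python sketch using the standard struct module to pack and unpack a 32-bit value in network (big-endian) byte order; it illustrates only the byte-ordering idea, not the full XDR encoding rules.

```python
import struct

# "Sender makes local canonical, receiver makes canonical local" for one
# 32-bit unsigned integer.  '!' selects network (big-endian) byte order.
def to_canonical(value: int) -> bytes:
    """Pack a 32-bit unsigned integer in network byte order."""
    return struct.pack("!I", value)

def from_canonical(data: bytes) -> int:
    """Unpack a 32-bit unsigned integer from network byte order."""
    return struct.unpack("!I", data)[0]

wire = to_canonical(3044)      # what the sender puts on the network
print(wire.hex())              # '00000be4' regardless of the host's own endianness
print(from_canonical(wire))    # 3044, back in the receiver's local form
```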
http://www.citap.com/documents/tcp-ip/tcpip013.htm
13
61
Science Investigation: Measurement

Measurement is a most important part of the sciences. Not only do scientists use measuring instruments of many kinds, they also have been in the forefront of developing new measuring tools and standards. While some countries continue to use their own customary measuring units (in the United States we use pounds, feet, gallons, and so on), the International System of Units (SI units, from the French Système International) has been adopted worldwide for both commercial and scientific purposes. The basic SI units include the following:
- Meter: length or distance
- Kilogram: mass, more commonly called "weight"
- Second: time
- Ampere: electric current
- Candela: luminous intensity (light)
- Mole: amount of a substance, defined by comparison with the number of atoms in a certain mass of a particular form of carbon
- Kelvin: temperature
From these basic units come many "derived units." An example is the measurement of area, for which the unit, called the square meter, is derived from the basic length unit, the meter. Another example is the measurement of velocity, commonly called "speed," for which the unit, meters per second, is derived from the basic units of length and time.

How to Measure

To measure something usually means taking a widely accepted tool, such as a meter stick, and comparing it with the thing to be measured. Practical measuring methods abound, ranging from astronomers' methods for estimating the Earth's distance in light-years from celestial objects to micrometers and other tools used for measuring very small objects. Scientists are even finding ways to measure the sizes of atomic particles, their "spin," and other characteristics. For this discussion of measurement, the metric system will be used in all examples. If you have a meter stick at hand for reference it will be useful. If you do not have one, the commonly available foot ruler with a metric scale along one side (about 30 centimeters, each divided into tenths or millimeters) will do.

Let's say that you want to measure the width of a room with a meter stick. You agree to measure it to the nearest millimeter (or 0.001 meter). If you are not familiar with measuring in the metric system, note that a millimeter is equal to roughly 1/25 inch. If the room you are measuring has a hard floor, you can make a pencil mark at the end of the meter stick each time you lay it out across the room. Or, if there is a carpet, you can stick a pin in it to mark the end of the stick. Let's say that you find there are five whole meters and a part of a meter that looks like figure 10.1 at the arrow point. This would read 5.823 meters. But one measurement is not enough if you intend to understand the problems of measurement. You should try to get an independent second measurement. To do this, have a friend make a measurement without telling him or her ahead of time what result you got. Suppose your friend measures the room using the same tools and methods and gets a result of 5.834. You now have two measurements: 5.823 and 5.834 meters. Which is right? No one knows. What if you measure it again? Good, but would that prove to be the correct measurement any more than the first two? Not likely. No one can know the "true" measurement. We must face the problem that there is no perfect way to measure it. Two people will probably not measure anything, even the width of a room, exactly alike. Even the same person is not likely to get the same measurement twice, especially if one allows enough time between measurements to forget the first measurement.
Just as important, no two meter sticks or other measuring instruments are exactly alike, either. There is no such thing as a perfect measuring instrument, just as there is no perfect measurement. To get as good a measurement as is reasonably possible of the width of the room, it is necessary to take several independent measurements. Let's say you end up with five different values: (1) 5.823, (2) 5.834, (3) 5.829, (4) 5.830, and (5) 5.825. Now we have a statistical question: Which one shall we choose to be the width of the room? If we take the arithmetical average, or mean, we get 5.828 meters. This average, we see, is not any one of the values we got by actual measurement. Is it the "correct" measurement? All we can say is that it is probably very close to it. If you were measuring the diameter of a marble with a micrometer, you would find the same problems. Several independent measurements would probably give you several different values. Again, no one would know which was the correct diameter. Once again, there would be a statistical problem in choosing a number to represent the diameter, the "true" diameter, which no one can know. Scientists have understood this uncertainty about measuring for a long time. It seems to surprise others, however. Some people react to this discovery by saying, "Well, if I can't really know what the measurement is, why bother trying to make it exact?" The reason is that scientists and others who keep working out better ways to measure things are trying to communicate better. Reporting scientific findings to others is an important part of scientific method, as you know, and communication among scientists has been helped immeasurably by better and better systems of measurement.

Dealing with Uncertainty in Measurement

How do we deal with this uncertainty, besides pretending it does not exist? In the most ordinary measuring of things, one reasonably careful measurement is all that is needed. Most people will settle for that, believing that they know the length of a thing, or its time, or its weight, and so forth. However, many scientific and technical workers must work as nearly as they can to the limits of the accuracy of their instruments and methods. They must also report to others what those limits of accuracy are. That is why there is need for ways to express these limits. Here is one method: Let's say that you are measuring across a piece of paper with a scale such as a ruler divided into centimeters (cm) and tenths of a centimeter (or millimeters). You want to express your measurement as centimeters to the nearest tenth, such as 23.7 cm. (See figure 10.2.) As you measure, you will see that the edge of the paper is not quite on one of the marks showing tenths, so pick the tenth mark that the edge is nearest to. That is, pick 23.7, not 23.6. (See figure 10.3.) However, you do not want anyone to think that you are claiming that the paper measured exactly 23.7 cm. Instead, you would like to tell others that you are reading the scale to within one-half of a division either way from a scale mark. (See figure 10.4.) One-half of one-tenth is 0.05 in decimal notation. This is why you put "± 0.05 cm" after your measurement figure. For example: 23.7 ± 0.05 cm (which reads, "23.7 plus or minus five-hundredths of a centimeter"). For another method of expressing uncertainty, let's go back to the measurement of the width of a room. There we made five measurements and reported the mean as 5.828 meters.
If we had wanted to express the same measurement in millimeters (mm) instead of in meters, we could have reported 5,828 mm. Remember, 5,828 mm is the average of five measurements of which the shortest was 5,823 mm and the longest was 5,834 mm. This spread from lowest to highest we call the "range" of the five measures. We express the range as 11 mm by subtracting the shortest measurement from the longest. Our mean of 5,828 mm is 6 mm below the top of the range and 5 mm above the bottom. We may also want people to know the range when we report the mean of several measurements. This is recorded as: 5,828 ± 6 mm. Or, we could report it in its meter unit: 5.828 ± 0.006 m. This way of using the mean to express the range is an overly simplified version of the method, but it does show how the system works.

There is yet another way to express the uncertainty of measurements. As a simple example, let's say that you are measuring a rod with a meter stick and find that it is just under 1 meter. That is reported as 0.999 m (or 999 mm) because you wish to report that the measurement is nearer to 0.999 m than to 1.000 m (or nearer to 999 mm than to 1,000 mm). You are saying that the range is within 1 mm out of about 1,000 mm. Therefore, you report your measurement as 0.999 m with an "error" of 1 part per 1,000. The term "error" is not used in this method to mean that you made a mistake; it is another way of expressing the usual inaccuracy of any measurement. Suppose, however, that you take the same meter stick and measure a room that is close to 10 m (or 10,000 mm) across. You find that several measurements range from 9,995 to 10,005, or a net range of 10 mm. This, too, shows an error of about 10 parts in 10,000 or 1 part in 1,000, as in the previous example. To say that these measurements are accurate to 1 part in 1,000, we must assume that your meter stick is "exactly" 1 meter long. Of course, we cannot claim that, because we are working on the assumption that no two things are exactly alike (including your meter stick and some master meter stick). All measuring instruments, including wooden meter sticks, do change from their original dimensions. This unstable nature of things is part of our measurement problem. The changes that come with variations in temperature, humidity, and other factors must always be expected as part of our measuring and should be allowed for in our reporting if we are trying for high accuracy. Scientists working on one of the most important measuring problems, the measurement of the speed of light, using the best equipment and methods they can devise, must still allow for the uncertainty of their measurements. Thus, they report the velocity of light in a vacuum as 299,792,456 ± 1.1 meters per second.

We have discussed three different ways of reporting the uncertainty of measurements: (1) reference to one-half of the smallest scale division; (2) reference to a range of several values above and below the mean of those values; (3) the error in parts per 1,000, parts per million, or the like. Now that you better understand how the "pros" work with uncertainty in measurement, you may be able to handle it fairly easily while you do your own science investigations.
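As a quick illustration of these three reporting conventions, here is a minimal Python sketch using the five room-width measurements from the text; treating half of a 1 mm scale division as the reading uncertainty in method (1) is an assumption carried over from the centimeter-ruler example.

```python
# The five independent room-width measurements from the text, in millimetres.
measurements = [5823, 5834, 5829, 5830, 5825]

mean = sum(measurements) / len(measurements)     # 5828.2 -> reported as 5828 mm
above = max(measurements) - mean                 # ~6 mm above the mean
below = mean - min(measurements)                 # ~5 mm below the mean
half_division = 0.5                              # half of a 1 mm scale division (assumed)

print(f"(1) single reading:  5828 ± {half_division} mm")
print(f"(2) mean and range:  {mean:.0f} ± {max(above, below):.0f} mm")
print(f"(3) relative error:  about {max(above, below) / mean * 1000:.0f} part(s) per 1,000")
```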
http://www.education.com/reference/article/measurement1/?page=3
13
77
II. The Earth and the Ocean Basins

Figure and Size of the Earth

As a first approximation the earth may be considered as a sphere, but, according to accurate observations, its figure is more closely represented by an ellipse of rotation—that is, an oblate spheroid, the shorter axis being the axis of rotation. The figure of the earth has been defined by various empirical equations, the constants of which are based on observations and are subject to modification as the number of observations increases and their accuracy is improved. The geometrical figures defined by these equations cannot exactly represent the shape of the earth because of the asymmetrical distribution of the water and land masses.

To define the position of a point on the earth's surface, a system of coordinates is needed, and as such the terms latitude, longitude, and elevation or depth are used. The first two are expressed by angular coordinates and the third is expressed by the vertical distance, stated in suitable linear units, above or below a reference level that is generally closely related to mean sea level. The latitude of any point is the angle between the local plumb line and the equatorial plane. Because the earth can be considered as having the form of a spheroid, and as the plumb line, for all practical purposes, is perpendicular to the surface of the spheroid, any plane parallel to the Equator cuts the surface of the spheroid in a circle, and all points on this circle have the same latitude. These circles are called parallels of latitude. The latitude is measured in degrees, minutes, and seconds north and south of the Equator. The linear distance corresponding to a difference of one degree of latitude would be the same everywhere upon the surface of a sphere, but on the surface of the earth the distance represented by a unit of latitude increases by about 1 per cent between the Equator and the Poles. At the Equator, 1 degree of latitude is equivalent to 110,567.2 m, and at the Poles it is 111,699.3 m. In table 1 are given the percentages of the earth's surface between different parallels of latitude.

The line in which the earth's surface is intersected by a plane normal to the equatorial plane and passing through the axis of rotation is known as a meridian. The angle between two meridian planes through two points on the earth's surface is the difference in longitude of those points.

Table 2. Dimensions of the earth:
| Equatorial radius, a | 6378.388 km |
| Polar radius, b | 6356.912 km |
| Difference (a − b) | 21.476 km |
| Area of surface | 510,100,934 km² |
| Volume of geoid | 1,083,319,780,000 km³ |

The distance between points on the earth's surface and the area represented by a given zone cannot be correctly represented unless the size of the earth is known. The values for the equatorial and polar radii are given in table 2, with other data concerning the size of the earth that can be computed from these values. The values for the equatorial and polar radii are those for sea level. The land masses are elevations upon the geometrical figure of the earth, and the sea bottoms represent depressions. Measurements of depressions below sea level, to be strictly comparable, should be referred to the ideal sea level; that is, to a sea surface which is everywhere normal to the plumb line. In the open ocean the deviations from the ideal sea level rarely exceed 1 or 2 m.
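As a quick check on the figures in table 2, a minimal Python sketch computing the difference of the radii and the resulting flattening of the spheroid (about 1/297 for these values):

```python
# Flattening of the spheroid from the equatorial and polar radii in table 2.
a = 6378.388   # equatorial radius, km
b = 6356.912   # polar radius, km

difference = a - b            # 21.476 km, as listed in the table
flattening = difference / a   # dimensionless measure of the oblateness
print(f"a - b = {difference:.3f} km, flattening = 1/{1 / flattening:.0f}")
# -> a - b = 21.476 km, flattening = 1/297
```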
The errors that are introduced by referring soundings to the actual sea surface are insignificant in deep water, where the errors of measurement are many times greater. In coastal areas where shoal depths represent a hazard to navigation and where soundings can be made with great accuracy, the choice of the reference plane (datum) to which soundings are referred becomes important. Reference planes in use include the following:
- Mean low water: United States (Atlantic Coast), Argentina, Norway, Sweden.
- Mean lower low water: United States (Pacific Coast).
- Mean low water springs: Great Britain, Italy, Germany, Denmark, Brazil, Chile.
- Mean monthly lowest low water springs: Netherlands.
- Lowest low water springs: Brazil, Portugal.
- Indian spring low water: India, Argentina, Japan.
- Mean semi-annual lowest low water: Netherlands East Indies.
- Lowest low water: France, Spain, Norway, Greece.
- International low water: Argentina.
The mean of the heights of low-water spring tides is known as the low water springs. International low water is 50 per cent lower, reckoned from mean sea level, than low water springs. Indian spring low water depends upon component tides found by harmonic analysis. Other terms are defined elsewhere (p. 562).

The topographic features of the earth's surface can be shown in their proper relationships only upon globes that closely approximate the actual shape of the earth, but for practical purposes projections that can be printed on flat sheets must be used. It is possible to project a small portion of the earth's surface on a flat plane without appreciable distortion of the relative positions. However, for the oceans or for the surface of the earth as a whole, most types of map projections give a grossly exaggerated representation of the shape and size of certain portions of the earth's surface. The most familiar type of projection is that developed by Mercator, which represents the meridians as straight, parallel lines. Although it is satisfactory for small areas and for the lower latitudes, the size and shape of features in high latitudes are greatly distorted because the linear scale is inversely proportional to the cosine of the latitude. In the presentation of oceanographic materials, this exaggeration is most undesirable and, consequently, projections should be used on which the true shape and size of the earth's features can be more closely approximated. Numerous types of projections have been developed by cartographers. In some instances, these are geometrical projections of the surface of the geoid on a plane surface that can be flattened out, while in others the essential coordinates, the parallels of latitude and the meridians, have been constructed on certain mathematical principles.

Maps and charts

In order to show the oceans with the least possible distortion of size and shape, the world maps used in this volume are based on an interrupted projection developed by J. P. Goode. Comparison with a globe will show that the major outlines of the oceans are not distorted and that the margins of the oceans are clearly represented. This projection has the additional advantage of being “equal-area”; that is, areas scaled from the map are proportional to their true areas on the surface of the earth. To show the relationships between the various parts of the oceans in high latitudes, polar projections are used, and for smaller areas Mercator and other types of projections have been employed.
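To put a number on the Mercator exaggeration mentioned above, a minimal Python sketch evaluating the 1/cos(latitude) scale factor at a few latitudes:

```python
import math

# Mercator linear scale relative to the Equator: proportional to 1/cos(latitude),
# so features near the poles are greatly stretched.
for lat in (0, 30, 60, 80):
    scale = 1 / math.cos(math.radians(lat))
    print(f"latitude {lat:2d} deg: linear scale x{scale:.2f} relative to the Equator")
# 0 -> x1.00, 30 -> x1.15, 60 -> x2.00, 80 -> x5.76
```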
Distribution of Water and Land

The continental land masses extend in a north-south direction, with the greatest percentage of land in the Northern Hemisphere (table 3), and there is a more or less antipodal arrangement of land- and water-covered areas. The North Polar Sea surrounding the North Pole is opposite to the continent of Antarctica, which is centered on the South Pole, and the continental land masses represented by Europe, Asia, and part of Africa are antipodal to the great oceanic area of the South Pacific. The ocean waters are continuous around Antarctica and extend northward in three large “gulfs” between the continents, on the basis of which three oceans are recognized.

The Atlantic Ocean extends from Antarctica northward and includes the North Polar Sea. It is separated from the Pacific Ocean by the line forming the shortest distance from Cape Horn (70°W) to the South Shetland Islands, and the boundary between the Atlantic and the Indian Oceans is placed at the meridian of the Cape of Good Hope (20°E). The boundary between the Pacific and the Indian Oceans follows the line from the Malay Peninsula through Sumatra, Java, Timor, Australia (Cape Londonderry), and Tasmania, and follows the meridian of 147°E to Antarctica. In the north the limit between the Atlantic and the Pacific Oceans is placed in Bering Strait, which is only 58 km wide and has a maximum depth of 55 m. Unless otherwise stated, the oceans as defined above are considered to include the semi-enclosed adjacent seas that connect with them. Generally speaking, only three oceans are recognized, but it is sometimes desirable to make a further division, and the waters surrounding Antarctica are then sometimes treated separately.

The nomenclature applied to subdivisions of the oceans is very confused. Generic names designating certain types of features, such as sea, gulf, and bay, are used somewhat indiscriminately and hence have little physiographic significance. For example, the term sea is used in connection with inland salt lakes, such as the Caspian Sea, with relatively isolated bodies of the ocean, such as the Mediterranean Sea, with less isolated areas, such as the Caribbean Sea, and even for some areas with no land boundaries, such as the Sargasso Sea in the western North Atlantic. Several systems for naming parts of the oceans are employed in oceanographic work. In certain instances the boundaries are selected arbitrarily by drawing straight or curved lines on the map where there are no land features which constitute natural boundaries. Such a system is followed by the International Hydrographic Bureau (1937). Wüst (1936) has suggested that the submarine ridges that are present at depths of about 4000 m be used to delimit the various parts of the oceans, and that the names now applied to the basins with depths greater than 4000 m be used to designate the areas above them. The general location of such boundaries may be seen in chart I.

Oceanography is concerned not only with the form of the oceans as shown on a surface chart, but also with the distribution of properties and living organisms and the nature of the currents. Therefore, a system of nomenclature which indicates the relationships that exist in the sea would be very useful. Wüst's system, based on the ocean bottom topography, meets this purpose for the deep water but not for the upper layers.
To formulate “natural regions” of the oceans, other workers, notably Schott (1926, 1935), have attempted to bring together not only geographic and topographic relationships, but also the distribution of properties and organisms, the climatic conditions, and currents. In the discussion of the distribution of organisms, fig. 220 (p. 804) shows how the oceans are subdivided upon the basis of the faunal distribution alone, and in the discussion of the water masses of the oceans, fig. 209 (p. 740) shows a subdivision based upon the characteristic temperature and salinity relations of the various regions. A comparison of such charts shows that, although there are certain boundaries which fall in approximately the same localities, there are many regions in which it is not possible to reconcile limits established in different ways.

In table 3 are given the areas of land and water between parallels of latitude five degrees apart, for each hemisphere, both as areas (10⁶ km²) and as percentages. For the whole earth, the ocean waters cover 361.059 × 10⁶ km², or 70.8 per cent of the surface, and the land covers 148.892 × 10⁶ km², or 29.2 per cent.

In table 4 are given the areas, volumes, and mean depths of the oceans and of certain mediterranean and marginal seas that together constitute the adjacent seas. The data are from Kossinna (1921), and in most instances the designated areas are readily recognized, but for details concerning the boundaries the original reference should be consulted. The Arctic Mediterranean includes the North Polar Sea, the waters of the Canadian Archipelago, Baffin Bay, and the Norwegian Sea, and is therefore separated from the open Atlantic by the line joining Labrador and Greenland in Davis Strait and running through Greenland, Iceland, the Faeroe Islands, Scotland, and England, and across the English Channel.

Relief of the Sea Floor

From the oceanographic point of view the chief interest in the topography of the sea floor is that it forms the lower and lateral boundaries of water. The presence of land barriers or submarine ridges that impede a free flow of water introduces special characteristics in the pattern of circulation and in the distribution of properties and organisms. Furthermore, as will be shown in chapter XX, the nature of the sediments in any area is closely related to the surrounding topography. On the other hand, the geomorphologist or physiographer is concerned primarily with the distribution and dimensions of certain types of topographic features that occur on the submerged portion of the earth's crust. As 71 per cent of the earth's surface is water-covered, knowledge of the major features of the earth's relief will be fragmentary if based only upon those structures that can be seen on land. During the geological history of the earth, which covers a span of some thousands of millions of years, areas now exposed above sea level have at one or more periods been covered by the sea, and parts of the now submerged surface have been above sea level. Many problems in historical geology are therefore dependent upon knowledge concerning the configuration of the sea floor surrounding the continents and the form of the deep-ocean bottom.
Although valuable work in the open ocean has been carried on by scientific organizations, by far the greater proportion of our knowledge of submarine topography is based on soundings taken by or for national agencies in the preparation or improvement of navigational charts. In the United States the U. S. Coast and Geodetic Survey prepares charts for the waters bounding the United States and its possessions, and the Hydrographic Office of the U. S. Navy carries out similar work on the high seas and in foreign waters. The earlier hydrographic work was limited largely to the mapping of coast lines and to soundings in depths less than about 100 fathoms, where hazards to the safe operation of vessels might occur, but deep-sea soundings received a great impetus when surveys were made prior to the laying of the transoceanic cables in the latter part of the nineteenth century. Up to and including the time of the voyage of the Challenger, 1873–1876, all soundings were made with hemp ropes, which made the process a long and tedious undertaking.

Table 4. Areas, volumes, and mean depths of the oceans and of certain mediterranean and marginal seas (Kossinna, 1921).

| Body | Area (10⁶ km²) | Volume (10⁶ km³) | Mean depth (m) |
| Atlantic Ocean excluding adjacent seas | 82.441 | 323.613 | 3926 |
| Pacific Ocean excluding adjacent seas | 165.246 | 707.555 | 4282 |
| Indian Ocean excluding adjacent seas | 73.443 | 291.030 | 3963 |
| All oceans (excluding adjacent seas) | 321.130 | 1322.198 | 4117 |
| Mediterranean Sea and Black Sea | 2.966 | 4.238 | 1429 |
| Large mediterranean seas | 29.518 | 40.664 | 1378 |
| Small mediterranean seas | 2.331 | 0.402 | 172 |
| All mediterranean seas | 31.849 | 41.066 | 1289 |
| Gulf of St. Lawrence | 0.238 | 0.030 | 127 |
| East China Sea | 1.249 | 0.235 | 188 |
| Gulf of California | 0.162 | 0.132 | 813 |
| All adjacent seas | 39.928 | 48.125 | 1205 |
| Pacific Ocean, including adjacent seas | 179.679 | 723.699 | 4028 |
| All oceans (including adjacent seas) | 361.059 | 1370.323 | 3795 |

Because of their practical importance and the ease with which they could be obtained, the number of soundings in depths less than a few hundred meters accumulated rapidly during the nineteenth century, but in 1895 there existed only 7000 soundings from depths greater than about 2000 m, and of these only about 550 were from depths greater than 5500 m (Bencker, 1930). These data were used by Murray in preparing the bathymetric charts accompanying the reports of the Challenger Expedition. During the next twenty-five years the number of deep-sea soundings increased slowly, but the introduction of sonic-sounding equipment after 1920 has completely changed the picture. Devices for measuring the depth by timing the interval for a sound impulse to travel to the sea bottom and back again (only a few seconds even in deep water) are used in surveying work and are now standard equipment on many coastwise and oceanic vessels. The development of automatic echo-sounding devices (chapter X) not only made depth measurements simple but, by making accurate bathymetric charts available, introduced another aid in navigation, since passage over irregularities of the sea floor may be used to check positions. This development has necessitated extending accurate surveys into deeper water and, hence, farther from shore. Along the coasts of the United States the bottom is now being charted in detail to depths of about 4000 m.
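The depth measurement described above reduces to a single relation: the depth is half the two-way travel time of the impulse multiplied by the speed of sound in sea water. A minimal sketch in Python, assuming a constant sound speed of about 1500 m/s (a value not given in the text), illustrates the relation.

```python
# Minimal sketch of the echo-sounding relation described above: depth from
# the two-way travel time of a sound impulse. A constant sound speed is
# assumed here; real surveys correct for its variation with temperature,
# salinity, and pressure.

SOUND_SPEED_M_PER_S = 1500.0  # assumed mean value, not from the text


def echo_depth_m(two_way_travel_time_s, sound_speed=SOUND_SPEED_M_PER_S):
    """Depth in metres corresponding to a round-trip echo time in seconds."""
    return sound_speed * two_way_travel_time_s / 2.0


# An echo returning after about 5.3 s corresponds to roughly 4000 m,
# consistent with "only a few seconds even in deep water".
print(round(echo_depth_m(5.3)))  # ~3975
```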
With sonic methods, if the appropriate apparatus is available, it is no more trouble to sound in great depths than it is in shoal waters, and, since many naval vessels and transoceanic commercial vessels make systematic records of their observations, the soundings in the deep sea are now accumulating more rapidly than they can be plotted.

The most common method of representing submarine topography is to enter upon a chart showing the coast lines the numerical values of the soundings at the localities in which they were obtained. Charts issued by the national hydrographic services of the English-speaking countries give depths in fathoms or, if harbor charts, in feet (1 fathom = 6 ft = 1.8288 m). Those issued by other countries generally use meters, although still other units are employed by certain European countries. Because it is generally impossible to enter all soundings, and as numerical values alone do not give any graphic representation of the topography, contours of equal depths (isobaths) are drawn in those regions in which the number of soundings or the purpose of the chart makes it desirable. On navigational charts, contours are generally restricted to shallow areas where soundings are also shown, but, for certain regions that have been carefully examined, charts are now issued with contours entered to depths as great as 2000 fathoms (for example, U. S. …).

The accuracy with which submarine topography can be portrayed depends upon the number of soundings available and upon the accuracy with which the positions of the soundings were determined. Topographic maps of land surfaces are based on essentially similar data, namely, elevations of accurately located points, but the surveyor has one great advantage over the hydrographer. The surveyor is able to see the area under examination and thereby distribute his observation points in such a manner that the more essential features of the topography are accurately portrayed. The hydrographer, on the other hand, must construct the topography of the sea floor from a number of more or less random soundings. Sonic sounding methods and the introduction of more accurate means of locating positions at sea (see Veatch and Smith, 1939) have made it feasible to obtain adequate data for constructing moderately accurate charts or models of parts of the sea floor. This is particularly true of the coastal waters of the United States. Veatch and Smith have prepared contour maps of the eastern seaboard based on the investigations of the U. S. Coast and Geodetic Survey, and Shepard and Emery (1941) have made use of similar data from the Pacific Coast, where over 1,300,000 soundings were available.

In some instances it is preferable to represent the bottom configuration by vertical profiles or by relief models, but, because of the difference in magnitude of the vertical and horizontal dimensions of the oceans, it is generally necessary to exaggerate the vertical scale. The average depth of the ocean is about 3800 m, and the vertical relief of the ocean floor is therefore of the order of a few kilometers, whereas the horizontal distances may be of the order of thousands of kilometers. Hence such distorted representations give a false impression of the steepness of submarine slopes. If profiles are drawn to natural scale, the ocean waters form a shallow band with barely perceptible undulations of the bottom. Examples of undistorted profiles are given by Johnstone (1928).
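As a rough numerical illustration of why the vertical scale must be exaggerated, the sketch below computes the exaggeration for a hypothetical profile; the section length, relief, and frame dimensions are assumed values chosen only to show the order of magnitude involved.

```python
# Illustrative calculation (not from the text) of the vertical exaggeration
# needed to draw an ocean-bottom profile legibly. Assumed values: a section
# 4000 km long with about 5 km of vertical relief, drawn in a frame 20 cm
# wide and 5 cm tall.


def vertical_exaggeration(section_length_km, relief_km, frame_w_cm, frame_h_cm):
    """Ratio of the vertical drawing scale to the horizontal drawing scale."""
    horizontal_scale = frame_w_cm / (section_length_km * 1e5)  # cm of paper per cm of sea floor
    vertical_scale = frame_h_cm / (relief_km * 1e5)
    return vertical_scale / horizontal_scale


print(round(vertical_exaggeration(4000, 5, 20, 5)))  # 200, i.e. about 200:1
```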
In fig. 1 are shown two representations of a profile of the sea bottom in the South Atlantic based on the observations of the Meteor (Stocks and Wüst, 1935). The upper section (A) is constructed from thirteen wire soundings, and is comparable in detail to most of the profiles that could be prepared before the introduction of sonic methods. The lower section (B) is based upon over 1300 sonic soundings that were taken by the Meteor along the same route, shown in the map at the bottom part of the figure (C), where the depth contours are from chart I. The increasing complexity of the known topography of the sea bottom resulting …

The water surface coincides, for all practical purposes, with the surface of the geoid, and the sea bottom, if “flat,” would be parallel to the sea surface. Irregularities of the sea floor therefore represent departures from this surface, which is convex outward. Only in small features with steep slopes are depressions actually concave outward.

Fig. 1. Bottom topography in the South Atlantic Ocean. (A) Profile of the bottom between the South Shetland Islands and Bouvet Island based on 13 wire soundings. (B) Profile over the same course constructed from over 1300 sonic soundings (Meteor). (C) Bottom configuration as shown in Chart I and the track of the Meteor. Vertical exaggeration in (A) and (B) about 200:1. (In part, after Stocks and Wüst, 1935.)

The greatest depths so far discovered are in the Pacific Ocean, where, in the Philippines Trench and the Japan Trench, soundings greater than 10,000 m have been obtained. In the Philippines Trench the German vessel Emden obtained a sonic sounding of 10,540 m, which, however, is considered to be about 200 m too great. The Dutch vessel Willebrord …

Representations of submarine topography are usually referred to sea level, and particular interest has always been attached to those regions in which great depths are found. The greater detail with which the sea floor can now be mapped has emphasized the importance of relative relief; that is, the form and magnitude of elevations or depressions with respect to their general surroundings. In later pages it will be shown that there are two primary levels of reference on the earth's crust, one slightly above sea level, corresponding to the land masses, and a second at depths between 4000 and 5000 m, corresponding to the great oceanic basins. In comparing topographic features on land with those on the sea floor it is essential to consider them with reference to these levels.

Fig. 2. Hypsographic curve showing the area of the earth's solid surface above any given level of elevation or depth. At the left in the figure is the frequency distribution of elevations and depths for 1000-meter intervals.

One method of presenting the character of the relief of the earth's crust is by means of a hypsographic curve showing the area of the earth's solid surface above any given contour of elevation or depth. The hypsographic curve in fig. 2 is from Kossinna (1921). Although added data … The hypsographic curve of the earth's crust should not be interpreted as an average profile of the land surface and sea bottom, because it represents merely the summation of areas between certain levels without respect to their location or to the relation of elevations and depressions.
Actually, the highest mountains are commonly near the continental coasts, large areas of low-lying land are located in the central parts of the continents, and the greatest depths are found near the continental masses, and not in the middle of the oceanic depressions. Entered in fig. 2 are the percentages of elevations and depressions for 1000-m intervals. These show two maxima, one just above sea level and a second between depths of 4000 and 5000 m. The significance of these maxima is discussed later (p. 23).

In table 5 are given the percentage areas of the depth zones in the three oceans, and for all oceans with and without adjacent seas. It will be noted that the shelf (0–200 m) represents a prominent feature in the Atlantic Ocean, which is also the shallowest of the oceans. By combining data in tables 4 and 5 the absolute areas of the depth zones may be computed. The hypsographic curve in fig. 2 is based on the values for all oceans, including adjacent seas.

During the geological history of the earth, great changes have occurred in the relief of the land and sea bottom. The exact nature and extent of these vertical movements is beyond the scope of the present discussion, but it should be noted that changes in relative sea level of the order of 100 m, which are readily accounted for by the withdrawal and addition of water during glacial and interglacial periods, would expose and inundate relatively large areas.

The continental shelf is generally considered to extend to depths of 100 fathoms, or 200 m, but Shepard (1939) found that the limit should be somewhat less than this; namely, between 60 and 80 fathoms (110 and 146 m).

Table 5. Percentage areas of the depth intervals (m) in the Atlantic, Pacific, and Indian Oceans, and in all oceans, including and excluding adjacent seas.

From the above values it may be seen that the average slope of the shelf is of the order of 2 fathoms per mile, or 0.2 per cent. This corresponds to a slope angle of about 7ʹ. Although there is a general seaward slope of the shelf, it is by no means an even-graded profile. As mentioned above, there may be terraces, ridges, hills, and depressions, and in many areas there are steep-walled canyons cutting across it. Shelf irregularities are most conspicuous off glaciated coasts, and were caused by the ice during a glacial period when this zone was exposed to glacial erosion (Shepard, 1931). On land the slope is often more significant than the absolute range in elevation. According to Littlehales (1932) the smallest slope that the human eye can detect is 17ʹ. Therefore, except for the minor irregularities, the continental shelf would in general appear flat.

From an examination of 500 profiles, Shepard (1941) found that the inclination of the continental slope varied with the character of the coast. Continental slopes off mountainous coasts have, on the average, a slope of about 6 per cent (3°30ʹ), whereas off coasts with wide, well-drained coastal plains the slopes are about 3.5 per cent (2°0ʹ). The submerged slopes of volcanic islands are similar to the exposed slopes of volcanic mountains, and may be as great as 50° (Kuenen, 1935). In large submarine canyons the walls are as rugged and precipitous as those of the Grand Canyon of Arizona (fig. 8, p. 40). Fault scarps above and below sea level show comparable slopes. The average slopes of the deep-sea floor are small.
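The gradient-to-angle conversion quoted above for the shelf (2 fathoms per mile, about 0.2 per cent, corresponding to about 7ʹ) can be reproduced with elementary trigonometry. The sketch below assumes that the mile in question is a nautical mile of 1852 m, which is what yields the quoted figures; the text itself does not specify.

```python
import math

# Reproduce the shelf-gradient conversion quoted above. Assumptions:
# 1 fathom = 1.8288 m (as stated earlier in the text) and a nautical mile
# of 1852 m.


def slope_angle_arcmin(rise_m, run_m):
    """Slope angle in minutes of arc for a given rise over a given run."""
    return math.degrees(math.atan2(rise_m, run_m)) * 60.0


rise = 2 * 1.8288  # 2 fathoms, in metres
run = 1852.0       # 1 nautical mile, in metres

print(f"gradient: {100 * rise / run:.2f} per cent")          # ~0.20 per cent
print(f"slope angle: {slope_angle_arcmin(rise, run):.1f}'")  # ~6.8', i.e. about 7'
```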
Krümmel (Littlehales, 1932) found that in the North Atlantic the mean slopes varied between about 20ʹ and 40ʹ, but these are averages, obtained by dividing the difference in elevation by the distance between two points. Where the distances are great or when the number of soundings is small, the slopes obtained in this way do not give a true representation of the relief. The increased data now available have revealed irregularities comparable in ruggedness to the larger topographic features on land.

Major Features of Topography

The discussion of the bottom topography of the oceans will be restricted to a brief consideration of the large-scale topographic features that are represented on small charts with large contour intervals. In regions where many soundings have been obtained, it has been found that the sea bottom may be virtually as irregular as the land surface, but such details can be shown only on large-scale charts with small contour intervals, and are not included in this volume.

Submarine geology is concerned with the topography of the sea floor, the composition and physical character of the sedimentary and igneous materials that are found on the ocean bottom, and the processes involved in the development of topographic relief. The field is a relatively new one which has received great impetus from the development of sonic sounding methods that made it possible to obtain accurate maps of the sea floor, and from the development of geophysical methods (measurement and interpretation of gravity anomalies, of the earth's magnetic field, and of the path and velocity of earthquake and artificial seismic waves) that yielded estimates of the character and thickness of the materials forming the crust of the earth. However, there is yet no agreement concerning the processes involved in the geological history of the ocean basins, and the various hypotheses will not be discussed here. General reviews of the problems will be found in Johnstone (1928), Bucher (1933), Kuenen (1935), and Gutenberg (1939). A symposium on the geophysical exploration of the sea bottom (Field et al, 1938) covers many of the developments.

The distribution of elevations and depressions on the earth's crust (fig. 2) shows that there are large portions with elevations between sea level and 1000 m, and with depths between 4000 m and 5000 m. According to Bucher (1933), the larger, lower ones are related to the character of the earth's crust, while the upper ones are the result of subaerial erosion and sedimentation. The question then arises as to the extent to which the topography of the ocean bottom with reference to a depth of about 4500 m corresponds to that of the land with reference to sea level or a slightly higher level. According to Bucher, the large-scale features are essentially similar, and elevations and depressions of comparable dimensions are found both on land and on the ocean bottom. Although the major features are comparable, the details are quite different, because erosion, which plays such an all-important role in the creation of sharp relief and in the ultimate destruction of elevations on land, is virtually absent in the sea. In the sea the most effective agents of erosion are the surface waves, and these tend to produce flat-topped features that are restricted to shallow depths, since the velocity of the water particles in such waves decreases rapidly with increasing depth (p. 528).
Other processes which may contribute to erosion of the sea floor are discussed in chapter XX and in the section dealing with the origin of submarine canyons (p. 41). Deposition is the characteristic process that modifies the topography of the sea bottom. Sedimentary debris accumulates in depressions, while there is little or no accumulation on topographic highs, which are devoid of fine-grained sediment and are subject to erosion if near the surface or in localities of exceptionally strong currents.

Bucher (1933) has stated that there are essentially two types of large-scale topographic features on the land and on the ocean bottom: (1) those of approximately equidimensional lateral extent, to which he applies the names swells and basins, and (2) those of elongate form, generally with steeper sides, to which he applies the names welts and furrows. On the ocean bottom the elongate welts and furrows appear to be the more common, and there is a considerable range in the size of such structures. There is a tendency for the large welts on the sea bottom to be parallel to the continental coasts, so that the oceans are divided into elongate troughs. Transverse ridges in turn subdivide these major depressions into a series of basins that are separated from one another to a greater or lesser degree. This ridge and basin topography is clearly shown by the bottom of the Atlantic Ocean and the Indian Ocean and in the western part of the Pacific Ocean, but does not appear to be so conspicuous a feature in the main part of the Pacific. Within the smaller welts and furrows, the steepest slopes, the highest elevations, and the greatest depths are found. The welts and furrows are commonly close together, with arced outlines, and are characteristically found near the continents. The deep furrows are generally on the convex …

Terminology of Submarine Topography

The terms applied to features of submarine topography will be classified according to the origin of the features rather than according to their size, although the latter procedure is the common one (for example, Niblack, 1928; Littlehales, 1932). The features of submarine relief may be grouped in two main categories, depending upon whether they have gained their characteristic form through diastrophic activity (crustal movements) or through erosion or deposition. The primary large-scale process involved in the development of relief must be diastrophic, but in many cases the characteristic feature is produced by erosion or deposition. No distinction will be made here between features that have been formed below the sea surface and those that may possibly owe their origin to subaerial erosion or deposition. As pointed out before, deposition in the sea tends to fill in the depressions and thus to level out the minor irregularities of the bottom, and, with the exception of those cases in which organisms play an important role (for example, in the formation of coral reefs), little or no deposition takes place on topographic highs.

There has been much discussion as to the processes that have led to the formation of the continental and insular shelves. Some authors maintain that they are wave-built (depositional); others consider that they are wave-cut (erosional), or that they are a combination of both processes (Johnson, 1919; Shepard, 1939).
Geophysical studies on the two sides of the North Atlantic (Bucher, 1940) indicate that the shelves are composed of great prism-shaped accumulations of sedimentary rock that at the outer edge of the shelf bordering the eastern United States are 4000 m thick. To what extent these features resulted from the slow accumulation and sinking of the crust and to what extent violent diastrophic movements have been involved has not yet been decided. The characteristic form of the shelf and of isolated flat-topped banks and shoals, and other features of the shallow bottom, indicate that wave erosion and transportation by currents have played an important part.

The terms used to designate certain types of topographic features, their French and German equivalents, and their definitions, which are given below, correspond to those suggested by the International Hydrographic Bureau (Niblack, 1928). Unfortunately, there is still considerable confusion in the use of certain terms, particularly those which apply to the larger features of the topography. Sometimes several different descriptive terms have been applied to the same structure, and in other instances the same term is applied to features of vastly different size and probably of different origin. A committee of the International Association of Physical Oceanography (Vaughan et al, 1940) attempted to clarify many of the problems relating to the terminology, but much confusion still prevails. In order to designate any individual feature, the descriptive term is prefixed by a specific name. The specific names attached to large-scale features are generally geographical, but those assigned to such features as banks, shoals, seamounts, canyons, and sometimes deeps are often those of vessels or individuals associated with their discovery or mapping.

Features Resulting from Crustal Deformation

Elevations. The large-scale elevations of the ocean bottom are termed ridges, rises, or swells.

Ridge (F, Dorsale; G, Rücken). A long and narrow elevation with sides steeper than those of a rise.

Rise (F, Seuil; G, Schwelle). A long and broad elevation which rises gently from the ocean bottom.

Isolated mountain-like structures rising from the ocean bottom are known as seamounts. Where the ridges are curved, and particularly if parts of them rise above sea level, they are sometimes termed arcs. The broad top of a rise is termed a plateau. The expression sill is applied to a submerged elevation separating two basins. The sill depth is the greatest depth at which there is free, horizontal communication between the basins.

Depressions. The terms trough, trench, and basin are those most commonly applied to the large-scale depressions on the ocean bottom.

Trough (F, Dépression; G, Mulde). A long, broad depression with gently sloping sides.

Trench (F, Fossé; G, Graben). A long and narrow depression with relatively steep sides.

Basin (F, Bassin; G, Becken). A large depression of more or less circular or oval form.

The terms defined above are used rather loosely and are applied to features of a wide range in size. For those parts of a depression which exceed 6000 m in depth, the term deep (F, Fosse; G, Tief) is used. As originally suggested by Murray, the term designated areas where the depths exceeded 3000 fathoms (5486 m), but it is now generally restricted to those depressions of greater depth (Vaughan et al, 1940). The term depth (F, Profondeur; G, Tiefe), prefixed by the name of the vessel concerned, may be used to designate the greatest sounding obtained in any given deep.
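The sill depth defined above can be read directly from a depth profile taken along the channel joining two basins: it is the shallowest point of that profile, and where several channels exist, the deepest of their sills governs. A minimal sketch, using hypothetical profile values, follows.

```python
# Minimal sketch of the sill-depth definition given above. Along a single
# connecting channel, the sill depth is the shallowest point of the depth
# profile, i.e. the greatest depth at which water can pass freely between
# the two basins. The profile values below are hypothetical.


def sill_depth_m(channel_profile_m):
    """Greatest depth of free horizontal communication along one channel."""
    return min(channel_profile_m)


def effective_sill_depth_m(channel_profiles):
    """If several channels connect the basins, the deepest of their sills governs."""
    return max(sill_depth_m(profile) for profile in channel_profiles)


profile = [3200, 2100, 1450, 530, 480, 610, 1900, 3400]  # hypothetical depths (m)
print(sill_depth_m(profile))  # 480
```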
Features Resulting from Erosion, Deposition, and Biological Activity

As pointed out above, the features in this category have been produced by erosion of, or deposition upon, structures which may be primarily of diastrophic origin. The most prominent types of features in this group are the shelf and the slope.

Shelf. The zone extending from the line of permanent immersion to the depth, usually about 120 m, where there is a marked or rather steep descent toward the great depths. Continental Shelf (F, Plateau continental; G, Kontinental-Schelf) is applied to the feature bordering the continents, while Insular Shelf (F, Socle; G, Insel-Schelf) is used for the feature surrounding islands.

Slope. The declivity from the outer edge of the shelf into deeper water. Continental Slope (F, Talus continental; G, Kontinental-Abfall) and Insular Slope (F, Talus insulaire; G, Inselabfall) are applied to the slopes bordering continents or islands.

The following terms are applied to the upper parts of elevations which show the effects of erosion or deposition.

Bank (F, Banc; G, Bank). A more or less flat-topped elevation over which the depth of water is relatively small, but which is sufficient for surface navigation.

Shoal (F, Haut-fond; G, Untiefe or Sandgrund). A detached elevation with such depths that it is a danger to surface navigation and which is not composed of rock or coral.

Reef (F, Récif; G, Riff). A rocky or coral elevation (generally elongate) which is dangerous to surface navigation. It may extend above the surface.

A variety of names has been applied to the steep-walled fissures that penetrate the slope and cut across the shelf. The most commonly used terms are canyon and valley, but gully, gorge, and mock-valley are also applied to these features. In addition to the terms given above, many expressions are employed in descriptions of submarine topography with the same meanings that they have when used for land topography.

Bottom Configuration of the Oceans

The major features of the topography of the ocean bottom are of such large dimensions that they are readily shown on a chart with contour intervals 1000 m apart. Such a representation is given in chart I, where the contours are entered for 1000-m intervals between 3000 m and 7000 m. The areas with depths less than 3000 m represent a rather small part of the sea floor, and the complex nature of the contours for depths less than this would confuse rather than add to the value of a chart of this kind. The topography is based upon the most recent charts available, and primarily upon the bathymetric chart prepared by the International Hydrographic Bureau in 1939 (Vaughan et al, 1940). Other sources that may be consulted for details concerning the configuration of the ocean floor are listed on page 29.

It will be noted that the complexity of the topography varies in different regions. This difference must be attributed, in part, at least, to the variable amount of data available, because in those regions where the soundings are widely spaced the contours will be smooth and rounded, whereas in those areas where there are numerous soundings the contours are more complex and irregular. The Atlantic Ocean, the central part of the North Pacific Ocean, the Northern Indian Ocean, and the area surrounding Antarctica are fairly well sounded, but in many other regions, such as the North Polar Sea and the Southern Indian and South Pacific Oceans, the observations are very sparse.
The increase in the complexity of the known topographic features that follows the accumulation of more depth measurements can be seen by comparing recent bathymetric charts with those published in the early years of the present century. The status of bathymetric knowledge in 1937 is shown by a series of charts in Vaughan et al (1937). As stated above, the topography of the ocean bottom is characterized by depressions and elongated ridges. Some of these features are of very …

A longitudinal ridge, the Indian Ridge, is present in the Indian Ocean and extends from India to Antarctica, but differs from the one in the Atlantic Ocean in that it is wider and does not extend so near the surface. In the Pacific Ocean the longitudinal elevations are not so conspicuous; however, the West Pacific Ridge, which is actually composed of several shorter ridges, can be traced from Japan to Antarctica, and is continuous at depths less than 4000 m except for breaks at 11°N, 10°S, and 53°S. A second elevation extends from Central America to the south and west, reaching Antarctica in the longitude of New Zealand. This East Pacific Ridge is continuous at depths less than 4000 m and separates the central depression from the deep basins bordering Central and South America and the Pacific Antarctic Basin. The effect of these major elevations on the distribution of bottom-water temperatures is shown in fig. 211, p. 749.

Within the major depressions or troughs which are bordered by the continents and the longitudinal ridges are transverse ridges that separate to a greater or lesser degree a number of basins. Wüst (Vaughan et al, 1940) has suggested that the 4000-m contour be used as the boundary in designating basins, but this is a purely arbitrary delimitation that places undue emphasis upon the absolute depth rather than upon the relative relief, which in many instances is of greater significance. For example, the Mediterranean Sea Basin is virtually excluded from such a classification, although it is a deep, isolated basin, much of it extending more than 3000 m below the sill in the Strait of Gibraltar. In the tabulation accompanying chart I are listed the names for the major parts of the oceanic depressions which Wüst has termed basins; namely, those parts which have depths exceeding 4000 m. Certain individual basins are clearly …

In the tabulation of the basins given on chart I are listed some of the more prominent deeps; namely, those features where the depths exceed 6000 m. Some deeps are located more or less centrally in the large basins, for example, Wharton Deep, Byrd Deep, and the numerous deeps in the central part of the North Pacific, but these rarely exceed 7000 m in depth. On the other hand, numerous deeps of elongate character are located near and parallel to continental coasts, island arcs, or submarine ridges; these correspond to the furrows discussed on p. 23. These marginal deeps, to which the term trench or sometimes trough is applied, are the features within which the greatest depths are found, in nearly all cases exceeding 8000 m. Only one such trench is found in the Indian Ocean; namely, the Sunda Trench. In the Atlantic are to be found the Romanche Trench, the South Sandwich Trench, and the Puerto Rico and Cayman Troughs. The greatest number are in the western part of the Pacific Ocean, although there is a chain of such features paralleling the mountainous coast of parts of Central and South America.
As stated before, the regions in which these deep trenches occur are sites of volcanic and seismic activity. The complex topography of the East Indian Archipelago, which has been described by Kuenen (1935), is shown schematically in fig. 208, p. 736.

For a detailed description of the features of the sea bottom the reader should consult Littlehales (1932). Vaughan (1938) has described the topography of the Southern Hemisphere. There is much information of value in the large report by Vaughan and others (1940), which also contains the small-scale bathymetric chart prepared by the International Hydrographic Bureau on a Mercator projection, a special chart of the North Pacific prepared by the U. S. Hydrographic Office, and an excellent, detailed chart of the Caribbean Sea region prepared by the same agency. The standard charts on the bathymetry of the oceans are those in the series known as the Carte Générale Bathymétrique des Océans, published by the International Hydrographic Bureau at Monaco. These charts comprise twenty-four sheets which are revised from time to time.

Fig. 3. Polar projection of the Arctic regions showing the generalized topography of the sea bottom. (Cherevichny's soundings of 1941 not included.)

Bottom Configuration of the Arctic and Antarctic Regions

Fig. 3 has been prepared to show the submarine topography of the North Polar regions, which cannot be properly visualized from the interrupted projection used in chart I. The figure is based on a chart by Stocks (1938) and incorporates all of the available data. Because of the larger scale it is possible to show the contours for the shallower depths that form such a large part of this area. The conspicuous topographic …

Very little is known of the topography of the North Polar Basin, and the form of the contours is largely hypothetical. Soundings greater than 3000 m are fairly numerous to the north of Europe, and there are some to the north of Alaska. A line of soundings also extends from the Pole and parallels the east coast of Greenland. These soundings were obtained by the Russian expedition which landed on the ice from planes and, in 1937–1938, drifted with the pack ice until picked up off the east coast of Greenland. Within 100 km of the Pole, this party obtained a sounding of 4300 m. The 5000-m contour is inserted on the basis of a single sounding of 5440 m obtained in 1927 by Sir Hubert Wilkins, who flew out by plane from Alaska, landed on the ice, and measured the depth with a portable sonic sounding instrument. The correctness of this sounding appears doubtful, however. In April, 1941, the Russian aviator Cherevichny, who landed on the ice in three different localities to the north of Wrangel Island and spent from three and a half to six days in each place, obtained much smaller depths (unpublished data communicated through the American Russian Institute, San Francisco, California). Cherevichny's soundings are as follows:

| Latitude (N) | Longitude | Depth (m) |
| 78°30ʹ | 176°40ʹE | 1856 |
| 80°00ʹ | 170°00ʹW | 3430 |

These soundings were not available when the bathymetric chart of the Arctic region (fig. 3) was prepared. The more or less elliptical North Polar Basin is connected with the Norwegian Basin by a fairly deep channel between Greenland and Spitsbergen, in which the sill depth is about 1500 m (table 6).
The Norwegian Basin, in which there are two depressions with depths exceeding 3000 m, is separated from the open Atlantic by a ridge extending from Greenland to Scotland, from which Iceland and the Faeroe Islands rise above sea level. The sill depths in Denmark Strait, between Greenland and Iceland, and over the Wyville Thomson Ridge, between the Faeroes and Scotland, are about 500 m. Rising from the floor of the Norwegian Basin is an isolated elevation that extends above the surface as Jan Mayen Island. Another depression of considerable magnitude that does not appear on chart I is in Baffin Basin between Baffin Island and Greenland, where the depths exceed 2000 m. This basin is separated from the open Atlantic by a ridge in Davis Strait between Baffin Island and Greenland, where the sill depth is about 700 m.

Interesting topographic features which are well developed in the Arctic regions are the “troughs” that cut across the shelf. These U-shaped furrows were apparently cut by glaciers at a period when the sea surface stood at a lower level. One such trough extends around the southern tip of Norway, and others may be traced by the irregularities of the 200-m contour to the north of Russia and between the islands of the Canadian Archipelago. Nansen (1928), in a discussion of the topography of the North Polar Basin, has described these features.

Fig. 4. Polar projection showing the generalized sea-bottom topography of the Antarctic regions. Depths less than 4000 m shaded. Heavy dotted lines show the location of the elevations which separate the various basins. Contours at depths of 3000 m and more correspond to those in chart I.

Fig. 4 is a polar projection of the Antarctic regions which shows the relationships between the major features of the submarine topography that cannot be visualized from chart I. The topography is based on the same data used for the preparation of chart I, supplemented from other sources. All major depressions are also shown in the world map, but it has been possible to enter in this figure the contours above 3000 m. There are many striking differences between the topography of the North …

The deep basins extend close to the continent of Antarctica, and the slopes are relatively steep. Joining South America to Antarctica is the South Antilles Arc, upon which are located South Georgia, South Sandwich, South Orkney, and South Shetland Islands. The ridge is continuous at 4000 m, and at 3000 m there are only relatively narrow openings. The Atlantic Ridge does not extend as far south as Antarctica, but is terminated in the vicinity of Bouvet Island. The ridge to the south of Africa and Madagascar is known as the Crozet Ridge, after the island of that name that rises from it. Forming a part of the Indian Ridge is the conspicuous elevation surrounding Kerguelen Island known as the Kerguelen Ridge. The ridge extending from Australia to Antarctica supports Macquarie Island and is known as the Macquarie Ridge. The importance of these ridges in determining the distribution of properties and the character of the circulation around Antarctica is discussed in chapter XV, p. 610 et seq.

The greatest depths found in the region shown in fig. 4 are in the Byrd Deep to the south of New Zealand and in the South Sandwich Trench on the convex side of the South Antilles Arc.

Bottom Configuration of Adjacent Seas

It is beyond the scope of this volume to present charts or descriptions of the many marginal and adjacent seas.
In table 4 are listed the area, volume, and mean depth of some of these features. The adjacent seas of the Arctic regions are shown in fig. 3, and in figs. 5 and 6 are shown the generalized topographies of the European and American Mediterraneans. Details of the topography of other marginal areas are presented elsewhere. The degree of isolation (that is, the extent to which free exchange of water with the adjacent ocean is impeded by the presence of land or submarine barriers) plays an important role in determining the characteristic distribution of properties in such regions (see chapters IV and XV).

The European Mediterranean, which comprises the Mediterranean Sea, the Black Sea, and the waters connecting them (namely, the Dardanelles, the Sea of Marmora, and the Bosporus), forms an intercontinental sea bordered by Europe, Asia, and Africa. The Mediterranean Sea occupies a deep, elongated, irregular depression with an east-west trend, and the Black Sea occupies a smaller and topographically simpler depression offset to the north. The Black Sea Basin, with depths exceeding 2200 m, is virtually isolated from the Mediterranean Sea proper, the connection …

Fig. 5. Generalized bottom topography of the European Mediterranean. The larger basins are (I) Algiers-Provençal, (II) Tyrrhenian, (III) Ionian, (IV) Levantine, and (V) Black Sea Basin.

The generalized topography of the European Mediterranean is shown in fig. 5, which is based on a chart prepared by Stocks (1938). The Black Sea Basin (V) is of more or less elliptical form except in the north, where there are irregular shallow seas, of which the largest is the Sea of Azov, east of the Crimean Peninsula. The connection with the Mediterranean Sea is through the Bosporus, the Sea of Marmora, and the Dardanelles into the Aegean Sea, where the irregular topography is reflected in the large number of islands. The Mediterranean Sea Basin is subdivided by a series of transverse ridges with a north-south trend, parts of which extend above sea level. The primary division into the western and eastern depressions is effected by a ridge extending from Europe to Africa, namely Italy, Sicily, and the submerged part of the elevation between these land areas and Africa. The sill depth in the strait between Sicily and Tunis is about 400 m. The Western Mediterranean, in turn, is subdivided into the Algiers-Provençal Basin (I) and the Tyrrhenian Basin (II) by the ridge extending from northwestern Italy to Tunis, from which Corsica and Sardinia rise above the sea surface. The Eastern Mediterranean is subdivided into two major depressions: the Ionian Basin (III), in which maximum depths of 4600 m are found, and the Levantine Basin (IV).

Fig. 6. Generalized bottom topography of the American Mediterranean. The larger basins are (I) Mexico Basin; (II) Cayman Basin and (III) Cayman Trough, in the Western Caribbean; (IV) Colombia Basin and (V) Venezuela Basin, in the Eastern Caribbean.

The greatest known depth in the Atlantic Ocean, 8750 m, is located in the Puerto Rico Trough to the north of Puerto Rico. The American Mediterranean encompasses the partially isolated basins of the wide gulf bordered by North, Central, and South America which are separated from the open Atlantic by ridges, parts of which rise above sea level. The generalized topography of the region is shown in fig. 6, which is based on a chart by Stocks (1938). The chief difference between the European Mediterranean and the American Mediterranean is that the latter has numerous shallow and several deep connections with the open Atlantic.
The topography of the American Mediterranean is extremely rugged, with deep trenches adjacent to steep-sided ridges, many of which rise above sea level. This is particularly true in the central and southern parts of the region, which are areas of pronounced gravity anomaly, volcanism, and strong seismic disturbances (Field et al, 1938). Bordering the low-lying coast of the Gulf of Mexico, off part of Honduras and Nicaragua, and surrounding the Bahama Islands are extensive shelves. The slopes leading down to deep water are in general rather steep, particularly between Cuba and Jamaica and along the …

The American Mediterranean is subdivided into two major depressions, the Gulf of Mexico and the Caribbean Sea, by a ridge between Yucatan and Cuba, and by the island of Cuba. The sill depth in the Yucatan Channel is less than 1600 m. The Mexico Basin (I) is a relatively simple depression lacking the irregularities that characterize the topography of the Caribbean region. Maximum depths of nearly 4000 m are found in the western part of the basin.

The Caribbean region is separated into two major basins, the Western and the Eastern Caribbean, by the Jamaica Rise, which extends from Honduras to Hispaniola and from which Jamaica rises above the surface. The Western Caribbean is in turn divided into the Yucatan Basin (II) and the Cayman Trough (III) by the Cayman Ridge, which extends westward from the southern extremity of Cuba. The Cayman Trough is the deepest depression in the American Mediterranean, and within the Bartlett Deep to the south of Cuba the U.S.S. S-21 obtained a maximum sounding of 7200 m. The Windward Passage between Cuba and Hispaniola appears to be a continuation of the depression forming the Cayman Trough. The greatest saddle depth between the Western and Eastern Caribbean is located in the passage between Jamaica and Hispaniola, where it is about 1200 m.

The Eastern Caribbean is partially divided into two basins by the Beata Ridge, which extends south and west from Hispaniola toward South America. The western portion of the depression is known as the Colombia Basin (IV) and the eastern as the Venezuela Basin (V). In the eastern part of the Venezuela Basin the Aves Swell separates a small basin with depths greater than 3000 m, which is called the Grenada Trough. The terminology to be applied to the features of the American Mediterranean is discussed by Vaughan in the report by Vaughan et al (1940), which also includes an excellent bathymetric chart of the Caribbean region prepared by the U. S. Hydrographic Office. The currents and distribution of properties in this area are described in chapter XV, p. 637.
Table 6. Maximum depths, sill depths, and related data for the larger basins in adjacent seas.

| Basin | Max. depth (m) | Adjacent deep depression | Surface feature | Location of sill | Sill depth (m) | Max. depth − sill depth (m) |
| Arctic Mediterranean Region | | | | | | |
| North Polar Basin | 5400 | North Pacific | Bering Strait | Siberia-Alaska | 55 | .... |
| Norwegian Basin | 3700 | North Atlantic | Denmark Strait | Greenland-Iceland | 500 | 3200 |
| | | North Atlantic | | Faeroe Is.-Scotland | 500 | 3200 |
| Baffin Basin | 2200 | North Atlantic | Davis Strait | Baffin Is.-Greenland | 700 | 1500 |
| European Mediterranean Region | | | | | | |
| Western Mediterranean Basin | 3700 | North Atlantic | Strait of Gibraltar | Gibraltar-Morocco | 320 | 3400 |
| Eastern Mediterranean Basin | 4600 | Western Mediterranean | | Sicily-Tunis | 400 | 4200 |
| Black Sea Basin | 2200 | Eastern Mediterranean | Bosporus | | 40 | |
| American Mediterranean Region | | | | | | |
| Eastern Caribbean Basin | 5500 | North Atlantic | Anegada and Jungfern Passages | Virgin Is.-Lesser Antilles | 1600 | 3900 |
| Western Caribbean Basin | 7200 | North Atlantic | Windward Passage | Cuba-Hispaniola | 1600 | 5600 |
| | | Eastern Caribbean | Jamaica Channel | Jamaica-Hispaniola | 1200 | .... |
| Mexico Basin | 3900 | Western Caribbean | Yucatan Channel | Yucatan-Cuba | 1600 | 2300 |
| | | North Atlantic | Strait of Florida | Florida-Bahama Is. | 800 | .... |
| Japan Sea Basin | 3700 | Philippines Basin | Tsushima Strait | Korea-Japan | 150 | 3550 |
| Red Sea Basin | 2800 | Indian Ocean | Strait of Bab-el-Mandeb | Somaliland-Arabia | 100 | 2700 |
| Baltic Sea Basin | 300 | North Atlantic | Danish Sounds | Danish Is.-Germany | 20 | 280 |

Isolated basins are of great interest from an oceanographic point of view, and in table 6 are brought together some of the data relating to the larger basins in adjacent seas. This tabulation does not include the basins in the East Indian Archipelago, which are discussed elsewhere (table 87, p. 738). The maximum depth within each basin and the greatest sill depth at which there is horizontal communication with the adjacent basins are listed, as well as the difference between the greatest depth in the basin and the sill depth. The latter value corresponds to the depth of the “lake” that would be formed if the water level were lowered to the greatest sill depth. It will be seen that most of the basins listed are without horizontal communication through vertical distances of 3000 to 4000 m, and that in the Yucatan Basin the greatest depth is 5600 m below the sill. In great contrast to these deep basins is the Baltic Sea (average depth, 55 m), where depths greater than 300 m are restricted to small, isolated depressions and where the sill depth is only 20 m.

In fig. 7 is shown the topography of the area off the coast of Southern California. This coastal area is one of considerable interest in that it is physiographically similar to the adjacent land area and apparently represents a down-warped portion of the continent. The continental shelf is relatively narrow, and offshore is a series of basins and ridges upon which several islands are located. In the southern part the real continental slope leading down to the oceanic abyss is approximately 150 miles from the coast. This is not shown in the map. Several small canyons are also depicted in the figure.

For many years it has been known that there were furrows cutting across the shelf in certain regions, but only since it became possible to obtain large numbers of accurately located soundings of the shelf and slope were such features found to be numerous and widespread.
Variously termed canyons, valleys, mock-valleys, and gullies by different authors, they have stimulated a great deal of interest among geologists, and a large literature has been built up dealing with the character and mode of formation of these canyons. The data concerning the topography of the canyons have largely been obtained by national agencies engaged in the careful mapping of nearshore areas. Such data have been used by Veatch and Smith (1939) and by Shepard and Emery (1941) to prepare general and detailed topographic charts of the canyons off the east and west coasts of the United States. Stetson (1936) has carried out independent observations on the east coast, and Shepard …

Although the terms listed above have been used more or less synonymously, the size and general character of the canyons vary greatly. Some of those off the mouths of rivers, such as the Hudson (fig. 9), Congo, and Indus Canyons, have depressions that can be traced across the shelf and even into the mouths of the rivers. Some canyons extend across the shelf, but others (for example, many of those shown in the charts prepared by Veatch and Smith) are limited to gashes in the continental slope and do not cut far across the shelf. The upper parts of the canyons are generally found to be steep-walled and V-shaped in profile, with the bottom sloping continuously outward (fig. 8). Some are winding, and many show a dendritic pattern, having smaller tributary canyons. In size they vary from small gullies to vast structures of the same dimensions as the Grand Canyon of the Colorado River (fig. 8).

Fig. 8. Profiles of submarine canyons. (A) Transverse profile of the submarine canyon in Monterey Bay compared to a profile of the Grand Canyon of the Colorado River in Arizona (cf. fig. 10). (B) Transverse profiles of a small, steep-walled canyon off the southern California coast. (C) Longitudinal profiles of the Lydonia Canyon and the adjacent shelf and slope. (D) and (E) Transverse and longitudinal profiles of the Hudson Canyon, showing the relation to the adjacent shelf and slope. The locations of the transverse sections (D) are shown on the longitudinal profile. Note the vertical exaggeration in certain of the diagrams and the differences in horizontal scales. (A and B after Shepard, 1938; C, D, and E after Veatch and Smith, 1939.)

The steep walls of the canyons are generally free of unconsolidated sediment, and in those canyons where special investigations have been made the walls appear to be generally of sedimentary rock; in a few cases (for example, Monterey Canyon off the California coast, fig. 10) the canyons are cut into granite that is overlain by sedimentary rock. The sediments in the bottom of the canyons are generally coarser than those on the adjacent shelves, and in some of them cobbles and gravel have been found.

The following agencies have been advanced as possible causes for the formation of the canyons:

Erosion by submarine currents. Daly (1936) advanced the theory that “density currents” produced by suspension of much fine-grained sediment may have flowed down the slope and cut the canyons, particularly during intervals of lowered sea level during the glacial periods. Density currents occur in reservoirs, but there is no evidence of their existence in the sea, where the density stratification of the water impedes vertical flow.
Spring sapping. Johnson (1939), in a thorough review of the literature concerning the character and origin of submarine canyons, develops the hypothesis that solution and erosion resulting from the outflow of underground water might contribute to the formation of the canyons.

Mudflows and landslides. Mudflows are known to occur in the canyons (Shepard and Emery, 1941) and are agents which tend to keep the canyons clear of unconsolidated debris, but it is doubtful whether they are capable of eroding rock.

Tsunamis or earthquake waves (p. 544). Bucher (1946) pointed out that most of the currents that might be found in canyons are of relatively low velocity and are therefore incapable of active erosion of rock. As a possible explanation of the submarine origin of the canyons he suggested that the rapid currents associated with earthquake waves set up in the sea by violent seismic motion of the sea bottom might be effective agents.

Subaerial erosion. The four explanations listed above are compatible with the formation of the canyons below the sea surface. Because of their many resemblances to river-cut canyons on land, many investigators, notably Shepard, believe that the canyons must have had a subaerial origin. However, there is no accepted geological theory that would account for the world-wide exposure of the shelf and slope within relatively recent geological time. To overcome this difficulty, Shepard has suggested that during the ice ages the amount of water removed from the ocean and deposited as ice caps may have been much greater than ordinarily believed (p. 25).

Fig. 9. Topography of the shelf and slope off part of the eastern coast of the United States showing different types of submarine canyons. The Hudson Canyon can be traced far across the shelf; others, such as the Lydonia, Oceanographer, and Hydrographer Canyons, cut into the outer margin of the shelf, while others are restricted to the slope itself. Depth contours in fathoms. (Simplified from chart in Veatch and Smith, 1939.)

Fig. 10. Monterey Canyon off the coast of California.

Shepard (Shepard and Emery, 1941) has carefully evaluated the arguments in favor of and opposed to these various hypotheses concerning the origin of submarine canyons, and he concludes that no single hypothesis yet advanced can account for their characteristic features. Problems also exist concerning the processes which remove the sedimentary debris that must be swept into the canyons from the shelf. Mudflows and transportation by currents are known to be operative, but their effectiveness has not yet been determined.

The study of the development of shorelines has been carried out by geologists and physiographers, who have classified the different types of coasts largely upon the basis of the extent to which erosion and deposition have affected the coastal configuration. Johnson (1919, 1925) has described the characteristic features of the coast and shallow-water zone, and these and other sources should be consulted in order to appreciate the complex nature of the transition zone between land and sea, where the effects of erosion and deposition, both subaerial and marine, must be …

Primary or youthful coasts with configurations due mainly to nonmarine agencies:
Those shaped by terrestrial erosion agencies and drowned by deglaciation or down-warping.
Those shaped by terrestrial depositional agencies such as rivers, glaciers, and wind.
Those shaped by volcanic explosions or lava flows.
Those shaped by diastrophic activity.
Secondary or mature coasts with configurations primarily the result of marine agencies:
Those shaped by marine erosion.
Those shaped by marine deposition.

The beach is defined as the zone extending from the upper and landward limit of effective wave action to low-tide level. Consequently, the beach represents the real transition zone between land and sea, since it is covered and exposed intermittently by the waves and tides. The characteristics of beaches depend so much upon the nature of the source material composing their sediments and the effects of the erosion, transportation, and deposition by waves and currents that they can be more profitably discussed in the chapter on marine sedimentation. The upper part of the beach is covered only during periods of high waves, particularly when storms coincide with high spring tides. The slope of the beach is largely determined by the texture of the sediments (p. 1018), but the extent of the beach will depend upon the range in tide. The terminology applied to the various parts of the beach and the adjacent regions is shown in fig. 11, taken from a report by the Beach Erosion Board (U. S. Beach Erosion Board, 1933).

Beaches composed of unconsolidated material are characteristically regions of instability. Every wave disturbs more or less of the smaller sedimentary particles, and the character of the waves will determine whether or not there is a net removal or accretion of sediment in any …

Fig. 11. Terminology applied to various parts of the beach profile. Berms are small impermanent terraces which are formed by deposition during calm weather and by erosion during storms. The plunge point is the variable zone where the waves break; hence its location depends on the height of the waves and the stage of the tide.

Although subject to short-period disturbances, the beach in general represents an equilibrium condition, despite the slow erosion of the coast or the permanent deposition that may be taking place. If the normal interplay of waves and currents is impeded in any way, as by the building of piers, breakwaters, or jetties, the character of the beach may be entirely changed. In some instances highly undesirable erosion of the coast may result, and in others equally undesirable deposition. These changes will proceed until a new equilibrium is established, which may render the structure worthless for the purpose for which it was originally intended. The construction of breakwaters, jetties, sea walls or groins, and similar structures on an open coast should be undertaken only after a careful investigation of the character and source of the sedimentary material, the prevailing currents, the strength and direction of the waves, and other factors that determine the equilibrium form of the beach. The Beach Erosion Board of the U. S. Army Corps of Engineers, as well as various private organizations, is engaged in studies of this type.

Bibliography

Kuenen, Ph. H. 1935. “Geological interpretation of the bathymetrical results.” Snellius Exped. in the eastern part of the Netherlands East Indies 1929–1930, v. 5, Geological Results, pt. 1, 123 pp. and charts. Utrecht.

Shepard, Francis P. 1941. Unpublished data.

Stocks, Theodor. 1938. “Morphologie des Atlantischen Ozeans. Statistik der Tiefenstufen des Atlantischen Ozeans.” Deutsche Atlantische Exped., Meteor, 1925–1927, Wiss. Erg., Bd. 3, 1. Teil, 2. Lief., p. 35–151.
http://publishing.cdlib.org/ucpressebooks/view?docId=kt167nb66r&doc.view=content&chunk.id=ch02&toc.depth=1&anchor.id=0&brand=eschol
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Because of their reliance on repeated computation and random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo method was coined in the 1940s by physicists working on nuclear weapon projects at the Los Alamos National Laboratory. There is no single Monte Carlo method; instead, the term describes a large and widely used class of approaches. However, these approaches tend to follow a particular pattern:
1. Define a domain of possible inputs.
2. Generate inputs randomly from the domain.
3. Perform a deterministic computation using each input.
4. Aggregate the results of the individual computations into the final result.
For example, the value of π can be approximated using a Monte Carlo method. Draw a square of unit area on the ground, then inscribe a circle within it. Now, scatter some small objects (for example, grains of rice or sand) throughout the square. If the objects are scattered uniformly, then the proportion of objects within the circle versus objects within the square should be approximately π/4, which is the ratio of the circle's area to the square's area. Thus, if we count the number of objects in the circle, multiply by four, and divide by the total number of objects in the square (including those in the circle), we get an approximation to π. Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it is the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π. Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped. Random methods of computation and experimentation (generally considered forms of stochastic simulation) can arguably be traced back to the earliest pioneers of probability theory (see, e.g., Buffon's needle, and the work on small samples by William Gosset), but are more specifically traced to the pre-electronic computing era. The difference usually emphasized about the Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by first finding a probabilistic analog (see simulated annealing). Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread. Perhaps the most famous early use was by Enrico Fermi in the 1930s, when he used random methods to calculate the properties of the newly discovered neutron.
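Returning to the π example above, here is a minimal sketch of the grain-scattering experiment in Python (the function name and sample count are illustrative assumptions, not part of the original description):

import random

def estimate_pi(num_samples=1_000_000):
    """Estimate pi by scattering random points in the unit square."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        # The inscribed circle has radius 1/2 and is centred at (1/2, 1/2).
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    # The fraction of points inside the circle approximates pi/4.
    return 4 * inside / num_samples

print(estimate_pi())  # typically prints a value near 3.14

As the convergence remark above suggests, the estimate improves only slowly: the error shrinks roughly in proportion to one over the square root of the number of samples.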
Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and the methods began to find wide application in many different fields. Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers that had previously been used for statistical sampling. Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at the corporate level, or to evaluate financial derivatives. The Monte Carlo method is intended for financial analysts who want to construct stochastic or probabilistic financial models as opposed to the traditional static and deterministic models. Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms. Monte Carlo methods have also proven efficient in solving coupled integro-differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations which produce photorealistic images of virtual 3D models, with applications in video games, architecture, design, computer-generated films, special effects in cinema, business, economics, and other fields. Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime but x says that it might be, we have observed an event with probability at most 1 in 4. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee ("n is probably prime"), but with a bound on how often the unguaranteed answer can be wrong (in this case, at most 25% of the time for each random x). See also the Las Vegas algorithm for a related, but different, idea. The opposite of Monte Carlo simulation might be considered deterministic modelling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Various combinations of each input variable are manually chosen (such as best case, worst case, and most likely case), and the results recorded for each so-called "what if" scenario. [citation: David Vose: "Risk Analysis, A Quantitative Guide," Second Edition, p. 13, John Wiley & Sons, 2000.]
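To make the primality example above concrete before moving on, here is a sketch in Python of the randomized test in the Rabin tradition (the Miller-Rabin test); the function name and the default number of witnesses are illustrative choices. Each random witness that fails to expose n as composite cuts the chance of a wrong "probably prime" verdict by a factor of at least four:

import random

def is_probably_prime(n, num_witnesses=10):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(num_witnesses):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # this witness proves that n is composite
    return True  # no witness found a problem: n is probably prime

print(is_probably_prime(561))        # False: 561 = 3 * 11 * 17
print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime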
By contrast with such single-point, deterministic modelling, Monte Carlo simulation considers random sampling of probability distribution functions as model inputs to produce hundreds or thousands of possible outcomes instead of a few discrete scenarios. The results provide probabilities of different outcomes occurring. [citation: Ibid, p. 16] For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then run again with Monte Carlo simulation and triangular probability distributions, shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios. [citation: Ibid, p. 17, showing graph] In numerical integration, the cost of deterministic methods grows rapidly with the number of dimensions; Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the law of large numbers, this method will display 1/√N convergence: quadrupling the number of sampled points will halve the error, regardless of the number of dimensions. A refinement of this method, known as importance sampling, is to keep the points random but make them more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Understandably, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines developed for this purpose. A similar approach involves using low-discrepancy sequences instead, the quasi-Monte Carlo method. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the area better in a sense and samples the most important points more often, so the simulation converges to the desired solution more quickly. Most Monte Carlo optimization methods are based on random walks. Essentially, the program moves a marker around in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving uphill against the gradient. Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). Since, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the observer.
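To illustrate the integration and convergence remarks above, the following minimal sketch in Python (the integrand, dimension, and sample count are illustrative) estimates a 100-dimensional integral over the unit hypercube by averaging the integrand at random points, and reports a standard error that shrinks like one over the square root of the number of samples:

import math
import random

def mc_integrate(f, dim, num_samples=20_000):
    """Plain Monte Carlo estimate of the integral of f over the unit hypercube [0, 1]^dim."""
    total = 0.0
    total_sq = 0.0
    for _ in range(num_samples):
        point = [random.random() for _ in range(dim)]
        value = f(point)
        total += value
        total_sq += value * value
    mean = total / num_samples
    variance = max(total_sq / num_samples - mean * mean, 0.0)
    # The standard error of the estimate shrinks like 1/sqrt(num_samples).
    return mean, math.sqrt(variance / num_samples)

# Example: integrate the sum of the coordinates over the 100-dimensional unit cube.
# The exact value is 100 * 1/2 = 50.
estimate, std_err = mc_integrate(lambda x: sum(x), dim=100)
print(estimate, "+/-", std_err)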
Such an exploration of the posterior distribution can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. For details, see Mosegaard and Tarantola (1995) or Tarantola (2005). What counts as "good" random numbers depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed, or follow another desired distribution, when a large enough number of elements of the sequence is considered, is one of the simplest and most common such tests.
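Returning to the Metropolis algorithm mentioned above, here is a minimal sketch of a random-walk Metropolis sampler in Python (the target density, step size, and sample count are illustrative assumptions). It draws samples from a density known only up to a normalizing constant, which is exactly the situation described above where no explicit formula for the distribution is available:

import math
import random

def metropolis(log_density, x0=0.0, step=1.0, num_samples=10_000):
    """Random-walk Metropolis sampler for a one-dimensional unnormalized density.

    log_density(x) may omit the normalizing constant, because only
    differences of log-densities enter the acceptance test.
    """
    samples = []
    x = x0
    log_p = log_density(x)
    for _ in range(num_samples):
        proposal = x + random.gauss(0.0, step)
        log_p_new = log_density(proposal)
        # Accept the move with probability min(1, p(proposal) / p(x)).
        if log_p_new >= log_p or random.random() < math.exp(log_p_new - log_p):
            x, log_p = proposal, log_p_new
        samples.append(x)
    return samples

# Example: sample from a standard normal density given only up to a constant factor.
draws = metropolis(lambda x: -0.5 * x * x)
print(sum(draws) / len(draws))  # close to 0, the mean of the target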
http://www.reference.com/browse/wiki/Monte_Carlo_method
Middle School Level Math Lesson Plans
- Integers - Students will use two different types of cereal to practice adding positive and negative integers. This will help them visualize how numbers cancel each other out.
- circumference, diameter, and radius - This activity will allow students to measure the circumference, diameter, and radius of a circle in a hands-on way. By being able to manipulate a circle and stretch it out, the idea of circumference will be more concrete. Students will use each other, desks, and chairs to create circles that can be measured.
- points on a graph - The students will use graph paper to plot points on a graph. When the points are connected they will make a familiar shape, number, or letter. They will practice reading coordinates to each other, as well as practice plotting them. The activity is meant to be fun and light, not competitive or stressful.
- The students will explore and create a poster design using polygons. The posters will be displayed in the classroom and students will be challenged to name as many of them as they can. This activity will allow students a tactile, expressive way to learn about polygons.
- The student will be able to identify objects that are symmetrical and draw half of an object by looking at the other half.
- Days of Christmas - Love = Cost? - By the end of this class students will be able to take information from a chart and determine the relationship that the numbers have.
- Plan for Problem Solving - Students will solve problems by using the
- Fractions and Mixed Numbers - To add fractions and mixed numbers with
- and Subtraction of Unlike Fractions - Students will be able to add and subtract fractions with unlike denominators by using their learned skills to find the greatest common factor and applying it to create fractions with like denominators.
- and Subtracting Decimals - Consider the size of a decimal prior to developing approaches to finding exact decimal sums or differences.
- Fractions - Students will be able to identify which fractions can be added, add and reduce fractions.
- Integers With Objects - The students will be able to add integers ranging from -10 through +10 without manipulatives by the end of the class
- Integers -10 to 10 - The students will be able to add integers ranging from -10 through +10 without manipulatives by the end of the class period.
- Positive and Negative Integers - Students will be able to add with positive and negative integers without the use of manipulatives by the end of the lesson.
- Wizards - Students will learn the concept of a variable.
- FOIL - The student will understand the procedure for multiplying 2
- a $1 Bill - Students will be able to identify symbols and words on a bill when asked.
- for Pizza? - Understanding fraction relationships.
- of Circles - The learner will select and use appropriate tools to measure two- and three-dimensional figures.
- of Triangle - Students will recognize and define three different triangles.
- Arithmetic Skills - The goal is to identify students whose weakness may be with adding, subtracting, multiplying, or dividing whole numbers.
- Color Distribution - Using graphs to investigate information.
- Cubes - Use a table to identify possible outcomes of independent events.
- and Analyzing Data - This project requires students to conduct a statistical investigation to determine some typical characteristics of students in
- Multiples and Common Factor - How to find the least common multiple and the greatest common factor.
- and Order Fraction and Decimal Equivalence - As students enter the room they will be handed a sticky note that will have a fraction or decimal number written on it.
- Values - Students will first fill out the K and the W on the KWL chart.
- and Sales Tax - Students will be able to independently calculate discounts and sales tax.
- Pi - Students are often just told what Pi is, but many are never able to find why Pi is Pi.
- Data - Students will display data using bar graphs, histograms and
- Angle Relationships - Learn to draw various angles and segments.
- the Number System - Represent integers on the number line.
- and Monomials - Determine whether one number is a factor of another and if certain expressions are monomials.
- Product/Quotient of Integers - Students will develop strategies for multiplying and dividing integers.
- Surface Area and Volume of Rectangular Prisms and Cylinders - Finding Volume and Surface of Rectangular Prisms. Finding Volume and Surface of
- with Rotations - Use their math skills to move an object from a fixed point to another fixed point.
- Decimals and Percents - In this lesson you will be introduced to these new words: numerator, denominator, improper fractions, mixed number fractions.
- Decimals, and Percents - Children will extend their understanding of the place value system to include decimals.
- and Planes - Students will be able to measure, classify, estimate, and draw angles.
- Manipulative - Use circles to construct regular polygons. Use circles to identify rotational and reflectional symmetry.
- Relationships - The activities in this chapter teach students the basic techniques of assembling coordinate graphs.
- Fun with Decimals - Demonstrate concepts of converting decimals to
- Math - My goal for this lesson is for the students to be able to connect what they learn in math to everyday activities and how that connects to
- on a Number Line - The learner will graph inequalities on a number
- Quick, Fun, Easy to Learn - Students will identify and understand positive and negative numbers.
- + Pythagoras - Develop the student's ability to visualize geometric relationships, esp. in 3 dimensions.
- Fractions - The learner will accurately multiply fractions with fractions and fractions with whole numbers.
- is hip to be a Square" - Relate geometry to algebra by using coordinate geometry to determine regularity, congruence, and similarity.
- Numbers: Place Value - How do we read large numbers?
- Pull - Explore concepts of probability through data collection, experiments,
- Multiplication - Identify matrix dimensions and determine whether the product is defined and if so, what the matrix product dimensions will
- Trigonometry - We are learning to find the trigonometry in multi direction
- Skills and Techniques To Use - Teacher will explain the importance of learning and maintaining organizational skills.
- and Area of Squares - Students will find the perimeter and the area of rectangles and squares using proper formulas.
- Factorization - TSW be able to write prime factorization using exponents and to describe exponents as a way of expressing repeated multiplication.
- Solving Skills - Student will be able to identify a pattern that indicates what operation to use.
- on a Grid Part 2 - Students will continue their quest of knowledge of plotting through the introduction of the negative quadrants and coordinates.
- and Improper Fractions - The learner will understand and compute with
- and Exponents - Students will learn to translate products into exponents as well as exponents into products of the same factor.
- Theorem - Problem of the snitch and its case. This will lead to the discussion of the Pythagorean theorem.
- Students will review ratios and equivalent ratios and then find unit rate using ratios.
- prisms; S.A. and V - Calculating surface area and volume of rectangular
- to the Nearest Tenth - Students will be able to identify tenths and hundredths place values in decimal numbers.
- Factor - The learner will understand and apply scale factor in all
- and Importance of Retailing - To distinguish the meaning of retailing.
- Lengths of Triangles - The student will be able to identify triangles using the correct terminology based on the measurement of the side lengths.
- Fractions at Fractionville - Develop students' broader understanding of the simplification process and fractions.
- Says - We look at data management and making sense of data.
- Area of Prisms - Identify the different faces of 3D figures.
- Measurement Hunt - Students will develop and demonstrate an understanding of measurement by the inch and foot using a ruler and through estimating measurements by comparison.
- Multiplication of Binomials - Students will be able to utilize the FOIL method when multiplying two binomials together.
- Boots Are Made For Walking - Students will use calculators to calculate the time it takes to travel to the moon using different methods of transportation.
- Pi - This activity allows students to discover why pi works in solving problems dealing with finding circumference.
- the Use of Place Value - Students will demonstrate understanding of the base-ten place value system.
- Equations and Inequalities - Students will be able to solve verbal problems by translating them into equations and inequalities.
- Prime Factorizations - Exploring a method for finding the greatest common factor and least common multiple of two numbers using prime factorizations.
- and Expressions Vocabulary - At the end of this lesson the students will be able to identify the key vocabulary words as well as write algebraic
- of Three-dimensional Figures - The student will become familiar with and use the terms views, isometric drawing, layers, and perspective to describe, draw and build a three-dimensional figure.
- you put in you will get out! - Generating input/output tables
- Checks and Keeping a Checkbook - The students will learn how to write checks and balance a checkbook.
- in Scientific Notation - To write very large numbers using scientific
http://www.teach-nology.com/teachers/lesson_plans/math/68/
Famous Theorems of Mathematics/Pythagoras theorem
The Pythagoras theorem, or the Pythagorean theorem, named after the Greek mathematician Pythagoras, states that: In any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle). This is usually summarized as follows: The square of the hypotenuse of a right triangle is equal to the sum of the squares on the other two sides. If we let c be the length of the hypotenuse and a and b be the lengths of the other two sides, the theorem can be expressed as the equation
a² + b² = c²
or, solved for c,
c = √(a² + b²).
If c is already given and the length of one of the legs must be found, the following equations (which are simply rearrangements of the original equation) can be used:
a = √(c² − b²) and b = √(c² − a²).
This equation provides a simple relation among the three sides of a right triangle, so that if the lengths of any two sides are known, the length of the third side can be found. A generalization of this theorem is the law of cosines, which allows the computation of the length of the third side of any triangle, given the lengths of two sides and the size of the angle between them. If the angle between the sides is a right angle it reduces to the Pythagorean theorem. The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship between the sides of a right triangle, knowledge of the relationship between adjacent angles, and proofs of the theorem. Megalithic monuments from circa 2500 BC in Egypt and in Northern Europe incorporate right triangles with integer sides. Bartel Leendert van der Waerden conjectures that these Pythagorean triples were discovered algebraically. Written between 2000 and 1786 BC, the Middle Kingdom Egyptian papyrus Berlin 6619 includes a problem whose solution is a Pythagorean triple. During the reign of Hammurabi the Great, the Mesopotamian tablet Plimpton 322, written between 1790 and 1750 BC, contains many entries closely related to Pythagorean triples. The Baudhayana Sulba Sutra, the dates of which are given variously as between the 8th century BC and the 2nd century BC, in India, contains a list of Pythagorean triples discovered algebraically, a statement of the Pythagorean theorem, and a geometrical proof of the Pythagorean theorem for an isosceles right triangle. The Apastamba Sulba Sutra (circa 600 BC) contains a numerical proof of the general Pythagorean theorem, using an area computation. Van der Waerden believes that "it was certainly based on earlier traditions". According to Albert Bürk, this is the original proof of the theorem; he further theorizes that Pythagoras visited Arakonam, India, and copied it. Pythagoras, whose dates are commonly given as 569–475 BC, used algebraic methods to construct Pythagorean triples, according to Proklos's commentary on Euclid. Proklos, however, wrote between 410 and 485 AD. According to Sir Thomas L. Heath, there is no attribution of the theorem to Pythagoras for five centuries after Pythagoras lived. However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted. Around 400 BC, according to Proklos, Plato gave a method for finding Pythagorean triples that combined algebra and geometry. Circa 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented.
Written sometime between 500 BC and 200 AD, the Chinese text Chou Pei Suan Ching (周髀算经), The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven, gives a visual proof of the Pythagorean theorem (in China it is called the "Gougu theorem", 勾股定理) for the (3, 4, 5) triangle. During the Han Dynasty, from 202 BC to 220 AD, Pythagorean triples appear in The Nine Chapters on the Mathematical Art, together with a mention of right triangles. The first recorded uses are in China, where it is known as the "Gougu theorem" (勾股定理), and in India, where it is known as the Bhaskara theorem. There is much debate on whether the Pythagorean theorem was discovered once or many times. Boyer (1991) thinks the elements found in the Shulba Sutras may be of Mesopotamian derivation. This is a theorem that may have more known proofs than any other; the book Pythagorean Proposition, by Elisha Scott Loomis, contains 367 proofs.
Proof using similar triangles
[Figure: Proof-Pythagorean-Theorem.svg]
Like most of the proofs of the Pythagorean theorem, this one is based on the proportionality of the sides of two similar triangles. Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. We draw the altitude from point C, and call H its intersection with the side AB. The new triangle ACH is similar to our triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well. By a similar reasoning, the triangle CBH is also similar to ABC. The similarities lead to the two ratios
AC / AB = AH / AC and BC / AB = BH / BC.
These can be written as
AC² = AB × AH and BC² = AB × BH.
Summing these two equalities, we obtain
AC² + BC² = AB × AH + AB × BH = AB × (AH + BH) = AB².
In other words, the Pythagorean theorem:
AC² + BC² = AB².
Euclid's proof
In Euclid's Elements, Proposition 47 of Book 1, the Pythagorean theorem is proved by an argument along the following lines. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, we require four elementary lemmata:
- If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent. (Side-Angle-Side Theorem)
- The area of a triangle is half the area of any parallelogram on the same base and having the same altitude.
- The area of any square is equal to the product of two of its sides.
- The area of any rectangle is equal to the product of two adjacent sides (follows from Lemma 3).
The intuitive idea behind this proof, which can make it easier to follow, is that the top squares are morphed into parallelograms with the same size, then turned and morphed into the left and right rectangles in the lower square, again at constant area. The proof is as follows:
- Let ACB be a right-angled triangle with right angle CAB.
- On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order.
- From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively.
- Join CF and AD, to form the triangles BCF and BDA.
- Angles CAB and BAG are both right angles; therefore C, A, and G are collinear. Similarly for B, A, and H.
- Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC.
- Since AB and BD are equal to FB and BC, respectively, triangle ABD must be congruent to triangle FBC.
- Since A is collinear with K and L, rectangle BDLK must have twice the area of triangle ABD.
- Since C is collinear with A and G, square BAGF must have twice the area of triangle FBC.
- Therefore rectangle BDLK must have the same area as square BAGF, which is AB².
- Similarly, it can be shown that rectangle CKLE must have the same area as square ACIH, which is AC².
- Adding these two results, AB² + AC² = BD × BK + KL × KC.
- Since BD = KL, BD × BK + KL × KC = BD × (BK + KC) = BD × BC.
- Therefore AB² + AC² = BC², since CBDE is a square.
This proof appears in Euclid's Elements as that of Proposition 1.47.
Garfield's proof
James A. Garfield (later President of the United States) is credited with a novel algebraic proof using a trapezoid containing two copies of the triangle; the trapezoid is one half of the rearrangement figure described below, in which four triangles enclose a square.
Similarity proof
From the same diagram as that in Euclid's proof above, we can see three similar figures, each being "a square with a triangle on top". Since the large triangle is made of the two smaller triangles, its area is the sum of the areas of the two smaller ones. By similarity, the three squares are in the same proportions relative to each other as the three triangles, and so likewise the area of the large square is the sum of the areas of the two smaller squares.
Proof by rearrangement
A proof by rearrangement is given by the illustration and the animation. In the illustration, the area of each large square is (a + b)². In both, the area of four identical triangles is removed. The remaining areas, a² + b² and c², are equal. Q.E.D. This proof is indeed very simple, but it is not elementary, in the sense that it does not depend solely upon the most basic axioms and theorems of Euclidean geometry. In particular, while it is quite easy to give a formula for the area of triangles and squares, it is not as easy to prove that the area of a square is the sum of the areas of its pieces. In fact, proving the necessary properties of area is harder than proving the Pythagorean theorem itself (compare the Banach-Tarski paradox). Actually, this difficulty affects all simple Euclidean proofs involving area; for instance, deriving the area of a right triangle involves the assumption that it is half the area of a rectangle with the same height and base. For this reason, axiomatic introductions to geometry usually employ another proof based on the similarity of triangles (see above). A third graphic illustration of the Pythagorean theorem fits parts of the sides' squares into the hypotenuse's square. A related proof would show that the repositioned parts are identical with the originals and, since sums of equals are equal, that the corresponding areas are equal. To show that a square is the result, one must show that the length of the new sides equals c. Note that for this proof to work, one must provide a way to handle cutting the small square into more and more slices as the corresponding side gets smaller and smaller.
Algebraic proof
An algebraic variant of this proof is provided by the following reasoning. Looking at the illustration, which is a large square with identical right triangles in its corners, the area of each of these four triangles is AB/2, the legs of each triangle having lengths A and B and the hypotenuse length C.
The A-side angle and B-side angle of each of these triangles are complementary angles, so each of the angles of the blue area in the middle is a right angle, making this area a square with side length C. The area of this square is C². Thus the area of everything together is given by:
4 × (AB/2) + C² = 2AB + C².
However, as the large square has sides of length A + B, we can also calculate its area as (A + B)², which expands to A² + 2AB + B². Setting the two expressions equal:
A² + 2AB + B² = 2AB + C² (distribution of the 4)
A² + B² = C² (subtraction of 2AB)
Proof by differential equations
One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse in the following diagram and employing a little calculus. As a result of a change in side a, by similar triangles and for differential changes,
da/dc = c/a, so c dc = a da
upon separation of variables. Integrating gives
c² = a² + constant,
to which a second term b db would be added on the right of c dc = a da if side b were allowed to change as well. When a = 0 then c = b, so the "constant" is b². So
c² = a² + b².
As can be seen, the squares are due to the particular proportion between the changes and the sides, while the sum is a result of the independent contributions of the changes in the sides, which is not evident from the geometric proofs. From the proportion given it can be shown that the changes in the sides are inversely proportional to the sides. The differential equation suggests that the theorem is due to relative changes, and its derivation is nearly equivalent to computing a line integral. The quantities da and dc are, respectively, infinitely small changes in a and c. If we use instead real numbers Δa and Δc, then the limit of their ratio as their sizes approach zero is da/dc, the derivative, which also approaches c/a, the ratio of the lengths of the sides of the triangles, and the differential equation results.
The converse of the theorem is also true: For any three positive numbers a, b, and c such that a² + b² = c², there exists a triangle with sides a, b and c, and every such triangle has a right angle between the sides of lengths a and b. This converse also appears in Euclid's Elements. It can be proven using the law of cosines, or by the following proof: Let ABC be a triangle with side lengths a, b, and c, with a² + b² = c². We need to prove that the angle between the a and b sides is a right angle. We construct another triangle with a right angle between sides of lengths a and b. By the Pythagorean theorem, it follows that the hypotenuse of this triangle also has length c. Since both triangles have the same side lengths a, b and c, they are congruent, and so they must have the same angles. Therefore, the angle between the sides of lengths a and b in our original triangle is a right angle. A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Where c is chosen to be the longest of the three sides:
- If a² + b² = c², then the triangle is right.
- If a² + b² > c², then the triangle is acute.
- If a² + b² < c², then the triangle is obtuse.
Consequences and uses of the theorem
Pythagorean triples
A Pythagorean triple consists of three positive integers a, b, and c such that a² + b² = c². In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Evidence from megalithic monuments in Northern Europe shows that such triples were known before the discovery of writing. Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13).
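The short sketch below (plain Python; the function names are illustrative) checks the defining relation and enumerates the primitive triples with hypotenuse at most 100, one way of reproducing the list that follows:

from math import gcd, isqrt

def is_pythagorean_triple(a, b, c):
    """True when a^2 + b^2 = c^2 for positive integers a, b, c."""
    return a > 0 and b > 0 and c > 0 and a * a + b * b == c * c

def primitive_triples(max_c=100):
    """List the primitive triples (a, b, c) with a < b and c <= max_c.

    A triple is primitive when a, b, and c share no common factor.
    """
    triples = []
    for c in range(5, max_c + 1):
        for a in range(3, isqrt(c * c // 2) + 1):
            b_squared = c * c - a * a
            b = isqrt(b_squared)
            if b * b == b_squared and gcd(a, gcd(b, c)) == 1:
                triples.append((a, b, c))
    return triples

print(is_pythagorean_triple(3, 4, 5))  # True
print(primitive_triples())             # sixteen triples, from (3, 4, 5) to (65, 72, 97)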
List of primitive Pythagorean triples up to 100: (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65), (36, 77, 85), (39, 80, 89), (48, 55, 73), (65, 72, 97).
The existence of irrational numbers
One of the consequences of the Pythagorean theorem is that irrational numbers, such as the square root of 2, can be constructed. A right triangle with legs both equal to one unit has a hypotenuse of length √2. The Pythagoreans proved that the square root of 2 is irrational, and this proof has come down to us even though it flew in the face of their cherished belief that everything was rational. According to the legend, Hippasus, who first proved the irrationality of the square root of two, was drowned at sea as a consequence.
Distance in Cartesian coordinates
The distance formula in Cartesian coordinates is derived from the Pythagorean theorem. If (x₀, y₀) and (x₁, y₁) are points in the plane, then the distance between them, also called the Euclidean distance, is given by
d = √((x₁ − x₀)² + (y₁ − y₀)²).
More generally, in Euclidean n-space, the Euclidean distance between two points p = (p₁, p₂, …, pₙ) and q = (q₁, q₂, …, qₙ) is defined, using the Pythagorean theorem, as:
d(p, q) = √((p₁ − q₁)² + (p₂ − q₂)² + … + (pₙ − qₙ)²).
- Pythagorean Theorem: Subtle Dangers of Visual Proof by Alexander Bogomolny, retrieved 19 December 2006.
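A small Python sketch of the n-dimensional distance formula above (the function name and sample points are illustrative):

from math import sqrt

def euclidean_distance(p, q):
    """Distance between two points with the same number of coordinates."""
    if len(p) != len(q):
        raise ValueError("points must have the same number of coordinates")
    # Apply the Pythagorean theorem coordinate by coordinate.
    return sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance((0, 0), (3, 4)))         # 5.0, the (3, 4, 5) triangle
print(euclidean_distance((1, 2, 3), (4, 6, 15)))  # 13.0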
http://en.wikibooks.org/wiki/Famous_Theorems_of_Mathematics/Pythagoras_theorem
Special relativity is an extension of rational mechanics where velocity is limited to the speed of light c, a constant in the vacuum of matter and radiation.
Speed limit
In classical mechanics, the absolute velocity is the sum of the velocity of the moving reference frame and the velocity relative to the moving reference frame. In special relativity one has to take into account the speed limit, that is, the speed of light. For colinear velocities we get, as we shall show below,
vx = (v'x + v) / (1 + v v'x/c²),
where vx is the "absolute" and v'x the relative velocity. According to the relativity theory, all velocities are relative; that is why the absolute velocity is replaced by the velocity in R, the observer's frame. The relative velocity v'x is the velocity in the frame R' moving relative to the observer in frame R. The frame R' moves at velocity v relative to R. This formula gives a speed limit, as may be seen by replacing v'x by c to get vx = c. For an infinite light speed one recovers the Galilean addition of velocities:
vx = v'x + v.
Galilean transformation
In classical kinematics displacement is proportional to the velocity and time is independent of the velocity. When changing the reference frame, the total displacement x in reference frame R is the sum of the relative displacement x' in R' and of the displacement vt of R' relative to R at a velocity v:
x = x' + vt, with t = t'.
This relation is linear when the velocity v is constant, that is, when the frames R and R' are Galilean reference frames.
Derivation of the Lorentz transformation
General linear transformation
The more general relationship, with four constants α, β, γ and v, is:
x = γ (x' + v t'), t = β (t' + α x').
The Lorentz transformation becomes the Galilean one for β = γ = 1 and α = 0.
Light invariance principle
The velocity of light is independent of the velocity of the source, as was shown by Michelson. We thus need to have x = ct if x' = ct'. Replacing x and x' in the two preceding equations, we have
ct = γ (c + v) t' and t = β (1 + α c) t'.
Replacing t' from the second equation, the first one becomes
ct = γ (c + v) t / (β (1 + α c)).
After simplification by t and dividing by cβ, one obtains:
1 + α c = (γ/β)(1 + v/c).
Relativity principle
The relativity principle postulates that there are no preferred reference frames, at least for Galilean reference frames. The following derivation does not use the speed of light and therefore allows it to be separated from the principle of relativity. The inverse transformation of
x = γ (x' + v t'), t = β (t' + α x')
expresses x' and t' in terms of x and t. In accord with the principle of relativity, these expressions should have the same form when permuting R and R', except for the sign of the velocity:
x' = γ (x − v t), t' = β (t − α x).
Identifying the preceding equations (substituting the first pair into the second and comparing coefficients), we have the following identities, verified independently of x' and t':
γ² − α v γ β = 1, γ v (γ − β) = 0, β² − α v β γ = 1.
This gives the following equalities:
β = γ and γ² (1 − α v) = 1.
The Lorentz transformation
Using the above relationship together with the relation 1 + α c = (γ/β)(1 + v/c) obtained from light invariance, we get
α = v/c² and β = γ = 1/√(1 − v²/c²).
We have now all the four coefficients needed for the Lorentz transformation, which writes in two dimensions:
x = γ (x' + v t'), t = γ (t' + v x'/c²).
The inverse Lorentz transformation writes:
x' = γ (x − v t), t' = γ (t − v x/c²).
The true basis of special relativity is the Lorentz transformation, generalizing that of Galileo to velocities near that of light. The Lorentz transformation expresses the transformation of space and time, both depending on the relative velocity between the observer's frame R and the moving frame R'. Another demonstration may be found in Einstein's book.
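As a quick numerical illustration of these results (a sketch in plain Python; the chosen velocities are arbitrary), the code below evaluates the Lorentz factor and the relativistic velocity addition, and shows that combining two sub-light velocities never exceeds c:

import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def add_velocities(v, v_prime):
    """Relativistic addition of colinear velocities."""
    return (v_prime + v) / (1.0 + v * v_prime / C ** 2)

v = 0.8 * C        # velocity of the frame R' relative to R
v_prime = 0.9 * C  # velocity measured in R'

print(lorentz_factor(v))               # about 1.667
print(add_velocities(v, v_prime) / C)  # about 0.988, still below 1
print(add_velocities(v, C) / C)        # 1.0 (up to rounding): light speed is preserved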
The direct Lorentz transformation is, in two dimensions:
x = γ (x' + v t'), t = γ (t' + v x'/c²).
The inverse Lorentz transformation is:
x' = γ (x − v t), t' = γ (t − v x/c²).
We have then four equations to be used as needed, using the Lorentz factor
γ = 1/√(1 − v²/c²).
Velocity addition law
The Lorentz transformation remains valid in differential form for a constant velocity:
dx = γ (dx' + v dt'), dt = γ (dt' + v dx'/c²).
From these two formulas we get the formula at the top of this page:
vx = dx/dt = (v'x + v) / (1 + v v'x/c²).
Minkowski metric
Euclidean space is characterised by the validity of the Pythagorean theorem, which may be written as a two-dimensional metric:
ds² = dx² + dy².
With y = ict, one obtains the Minkowski metric representing the pseudo-Euclidean space of special relativity:
ds² = dx² − c² dt².
The differential form of the Lorentz transformation, where v is the velocity of the frame R' relative to R, writes:
dx = γ (dx' + v dt'), dt = γ (dt' + v dx'/c²).
Replacing dx and dt as functions of dx' and dt' with the Lorentz transformation, one obtains the same Minkowski metric except for the primes:
ds² = dx'² − c² dt'².
Let us develop and simplify:
dx² − c² dt² = γ² [(dx' + v dt')² − c² (dt' + v dx'/c²)²] = γ² (1 − v²/c²)(dx'² − c² dt'²) = dx'² − c² dt'².
The Minkowski metric is conserved by the Lorentz transformation.
Time dilation
Let us consider a clock in its rest frame R', moving at a velocity v relative to a frame R where an observer is located. The clock rate is Δt' at rest, in its proper frame R', and Δt viewed from R. Since the clock is at rest in R', its position is constant in R', say x' = 0. To apply the Lorentz transformation, we have to choose the right equation among the four of the direct and reciprocal Lorentz transformations. We choose the one containing Δt', Δt and x':
Δt = γ (Δt' + v Δx'/c²) = γ Δt', since Δx' = 0.
The time interval between two beats appears larger on a moving clock than on a clock at rest. One says that time is dilated, or that the clock runs slow. The time of the moving clock does not flow any more when the clock moves at light speed, but only for the distant observer at rest. A high-speed particle of limited lifetime, like a meson coming from outer space, will have an apparently much larger lifetime when viewed from the Earth, but its proper lifetime remains unchanged. Let us consider now that an observer places himself in the moving frame R' and looks at a clock placed in the rest frame R. We shall have the same formula, but with t and t' reversed. Indeed the movement is relative; there is no absolute movement, but a symmetry between both Galilean frames.
Length contraction
Now consider a ruler at rest in a frame R' moving at a constant velocity v relative to a frame R where an observer is located. The length at rest of this ruler is Δx' for an observer in R'. The ruler appears to have a length Δx for the observer in R. In order to measure the length of the ruler, the latter has to take an instantaneous picture of its two ends, for example at time t = 0, with Δt = 0. He then obtains the lengths Δx and Δx'. He will use the equation of the Lorentz transformation where Δx, Δx' and Δt appear:
Δx' = γ (Δx − v Δt) = γ Δx, since Δt = 0.
This formula has the same form as for the time, except that the primes are on the left side. For this reason lengths contract instead of dilating as time does. One then usually writes:
Δx = Δx'/γ.
Acceleration transformation
Lorentz method
In classical kinematics, accelerations do not depend on the velocity of the Galilean frame: since the velocity of the frame is constant, its derivative, the acceleration, is zero. In special relativity, due to both time dilation and length contraction, the change of Galilean frame changes the acceleration. Let dvx/dt and dv'x/dt' be the accelerations of a particle of abscissas x and x' in the frame R of the observer and in the moving frame R'.
Since the acceleration is the second derivative of space with respect to time, and if the frames R and R' are approximately Galilean, the Lorentz factor γ is a constant to the fourth order, as pointed out by Einstein (the slowly accelerated electron), and we have, for colinear motion,
dvx/dt = (dv'x/dt') / (γ³ (1 + v v'x/c²)³).
Let us bind the accelerated particle to its frame R'. We then have v = vx; the frames are thus no longer Galilean. At low speeds we are in the Newtonian domain, where γ ≈ 1; the accelerations are practically equal in R and R'. For a velocity near the speed of light, the variation dv/dt of the velocity is small and the acceleration, as viewed from R, is therefore low. In both cases the frames are approximately Galilean. It is only at intermediate velocities that γ may be large while the variation of the velocity is not negligible. This approximation seems to be valid according to the few available experimental data. We may write v' = v'x for the velocity of the particle in R'. Using an identity due, as it seems, to Lorentz (H. A. Lorentz, The theory of electrons and its applications to the phenomena of light and radiant heat, Courier Dover Publications, 2003),
1 − vx²/c² = (1 − v²/c²)(1 − v'x²/c²) / (1 + v v'x/c²)²,
one then obtains
γ(vx)³ dvx/dt = γ(v'x)³ dv'x/dt'.
General method
Starting with the Lorentz transform for velocity in the S' frame,
v'x = (vx − v) / (1 − v vx/c²),
as well as the transform for time in the S' frame,
t' = γ (t − v x/c²),
we are able to derive the equation for acceleration in relativistic circumstances. Taking the differential of both, and remembering to use the quotient rule, we get:
dv'x = dvx / (γ² (1 − v vx/c²)²) and dt' = γ (1 − v vx/c²) dt.
Now we divide these two equations and divide top and bottom by dt:
dv'x/dt' = (dvx/dt) / (γ³ (1 − v vx/c²)³).
A little more simplification yields the end product:
a'x = ax / (γ³ (1 − v vx/c²)³).
Relativistic Newton's Second Law of Motion
Let us multiply both sides of the acceleration transformation equation by the constant rest mass m0. With the particle momentarily at rest in R' (vx = v, v'x = 0), it becomes
γ³ m0 dvx/dt = m0 dv'x/dt'.
In the frame R', where the velocity of the particle is low (in fact zero), one may apply Newton's Second Law of Motion. The right side represents the force F' in the frame R'. If one admits that the force does not depend on the frame, since it is applied to the particle, we have F = F' and then
F = γ³ m0 dvx/dt = d(γ m0 vx)/dt = d(mr vx)/dt,
where mr is the relativistic mass, appearing to the distant observer and varying as a function of the velocity:
mr = γ m0 = m0 / √(1 − vx²/c²).
Kinetic energy
In a frame moving at velocity v relative to the observer, contrary to the Galilean transformation, the Lorentz transformation gives an acceleration depending on the relative speeds of the reference frames, even Galilean ones (we limit ourselves to the case where velocity and acceleration are colinear). In order to produce the acceleration a = dv/dt, it is necessary to apply a force, defined by the relativistic Newton's Second Law of Motion as the time derivative of the momentum mr v. The variation dT of the kinetic energy being equal to the work of the applied force F for a displacement dx, we have:
dT = F dx = v d(mr v) = m0 v d(γ v).
Let us use an identity similar to that of Lorentz above:
dγ = γ³ v dv / c².
The variation of the kinetic energy becomes dT = m0 c² dγ. Integrating this equation, one obtains:
T = m0 c² γ + constant.
The kinetic energy should be zero when the velocity v is zero, that is, when γ = 1. The integration constant is thus −m0 c². The kinetic energy is:
T = m0 c² (γ − 1) = (mr − m0) c²,
equal to the difference between the rest mass m0 and the relativistic mass mr multiplied by the universal factor c². These two masses carry indices in order to avoid any confusion.
Total relativistic energy E = mc²
The total relativistic energy E = mc² must not be confused with the total classical mechanical energy. The sum of the kinetic energy and the potential energy remains a constant value without being an absolute value. Drivers know that the distance they may travel is proportional to their volume or mass of gas, with a coefficient K depending on its heat content.
It may be assumed that there is a maximum value of energy of any type contained in a given mass. The maximum energy available in a given mass is obtained when all the mass is converted into energy (radiative, thermal, mechanical, electrical…). A higher energy content is impossible, because there is no matter any more. The problem is to find the universal coefficient K. Let us apply relativity. The maximum energy available is then Er = K mr in the observer's frame and E0 = K m0 in the proper frame of the object of mass m0. The difference between these two energies,
Er − E0 = K (mr − m0),
is due uniquely to the velocity, the relativistic mass depending only on the rest mass and the relative velocity between the object and the observer. The application of the Lorentz transformation, of Newton's law and of the definition of energy has shown in the preceding paragraph that the relativistic kinetic energy is:
T = (mr − m0) c².
Identifying these last equations, one finds
K = c².
The total relativistic energy is then:
E = m c²,
where m is the mass, static or dynamic, depending on whether the relative velocity is small or comparable to the speed of light. We have derived the most celebrated equation of the twentieth century from first principles:
- the linear transformation of space and time,
- the light invariance principle,
- the relativity principle,
- Newton's second law of motion.
See also
- Albert Einstein, Relativity: The Special and General Theory
- A. Einstein, Ann. Phys. 17, 891, 1905
- H. A. Lorentz, The theory of electrons and its applications to the phenomena of light and radiant heat, Courier Dover Publications, 2003
http://en.wikiversity.org/wiki/Special_Relativity
A constant is a value that doesn't change. There are two types of constants you will use in your programs: those supplied to you and those you define yourself. To create a constant to use in your program type the Const keyword followed by a name for the variable, followed by the assignment operator "=", and followed by the value that the constant will hold. Here is an example: Module Exercise Sub Main() Const DateOfBirth = #12/5/1974# MsgBox(DateOfBirth) End Sub End Module When defining a constant like this, the compiler would know the type of data to apply to the variable. In this case the DateOfBirth constant holds a Date value. Still, to be more explicit, you can indicate the type of value of the constant by following its name with the As keyword and the desired data type. Based on this, the above program would be: Module Exercise Sub Main() Const DateOfBirth As Date = #12/5/1974# MsgBox("Date of Birth: " & DateOfBirth) End Sub End Module When creating a constant, if its value supports a type character, instead of using the As keyword, you can use that type character. Here is an example: Module Exercise Sub Main() Const Temperature% = 52 End Sub End Module As mentioned earlier, the second category of constants are those that are built in the Visual Basic language. Because there are many of them and used in different circumstances, when we need one, we will introduce and then use it. So far, to initialize a variable, we were using a known value. Alternatively, you can use the Nothing constant to initialize a variable, simply indicating that it holds a value that is not (yet) defined. Here is an example: Module Exercise Sub Main() Dim DateOfBirth As Date = Nothing End Sub End Module If you use the Nothing keyword to initialize a variable, the variable is actually initialized to the default value of its type. For example, a number would be assigned 0, a date would be assigned January 1, 0001 at midnight. The scope of a variable determines the areas of code where the variable is available. You may have noticed that, so far, we declared all our variables only inside of Main(). Actually, the Visual Basic language allows you to declare variables outside of Main() (and outside of a particular procedure). A variable declared inside of a procedure such as Main() is referred to as a local variable. Such as variable cannot be accessed by another part (such as another procedure) of the program. A variable that is declared outside of any procedure is referred to as global. To declare a global variable, use the same formula as we have done so far. For example, just above the Sub Main() line, you can type Dim, followed by the name of the variable, the As keyword, its type and an optional initialization. Here is an example: Module Exercise Dim DateOfBirth As Date Sub Main() End Sub End Module As mentioned above, you can initialize the global variable when or after declaring it. Here are two examples: Module Exercise Dim UnitPrice As Double Dim DateOfBirth As Date = #12/5/1974# Sub Main() UnitPrice = 24.95 MsgBox("Date of Birth: " & DateOfBirth) MsgBox("Unit Price: " & UnitPrice) End Sub End Module As we will see when studying procedures, a global variable can be accessed by any other procedure (or even class) of the same file. In most cases, a global variable must be declared inside of a module, that is, between the Module ModName and the End Module lines but outside of a procedure. Based on this, such a variable is said to have module scope because it is available to anything inside of that module. 
In the small programs we have created so far, we were using only one file. A typical application uses as many files as necessary. You can use one file to list some objects used in other files. As we move on, we will see different examples of creating different files in the same program. In the Visual Basic language, a file that holds Visual Basic code is called a module. As mentioned above, a module is primarily a file that holds code. Therefore, there is no complication with creating one. It is simply a file that holds the .vb extension. If you create a console application using the Console Application option of the New Project dialog box, Microsoft Visual Studio would create a default file for and would insert the module template code. To create a module in Microsoft Visual Studio or Microsoft Visual Basic 2008 Express Edition, on the main menu, you can click Project -> Add Module... This would display the Add New Item dialog box with the Module selected as default in the Templates list. The studio would also suggest a default name. You can accept that name or change it. The name of the module follows the rules of an object in the Visual Basic language. Once you are ready with the dialog box, you can click Add. If you are manually creating your code from Notepad or any text editor, you can simply create any file in your folder and give it the .vb extension. Probably the most important thing in a module is that the area that contains its code must start with a Module ModuleName and end with an End Module line: Module ModuleName End Module Anything between these two lines is part of the normal code and everything that is normal code of the module must be inserted between these two lines. No code should be written outside of these two lines. After creating a module and adding its required two lines, you can add the necessary code. Of course, there are rules you must follow. At a minimum, you can declare one or more variables in a module, just as we have done so far. Here is an example: Module Exercise Dim FullName As String End Module Each module of a project is represented in the Solution Explorer by a name under the project node. To open a module using the Solution Explorer: If there are many opened module, each is represented in the Code Editor by a label and by an entry in the Windows menu. Therefore, to access a module: As you may have realised, when you start a console application, Microsoft Visual Basic creates a default module and names it Module1. Of course, you can add as many modules as necessary. At any time, you can change the name of a module. To rename a module, in the Solution Explorer If you have a module you don't need anymore, to delete it, in the Solution Explorer, right-click it and click Delete. You will receive a warning to confirm your intentions or to change your mind. As mentioned already, you can use more than one module in a project and you can declare variables in a module. This allows different modules to exchange information. For example, if you are planning to use the same variable in more than section of your application, you can declare the variable in one module and access that variable in any other module in the application. A variable that is declared in one module and can be accessed from another module in the same application is referred to as a friend. Variables are not the only things that can benefit from this characteristic. We will see other types. To declare a friendly variable, instead of Dim, you use the Friend keyword. 
Here is an example: Module Exercise Friend FullName As String End Module After declaring such a variable, you can access it from any module of the same application. Here is an example: Instead of allowing a member of a module to be accessible outside the module, you may want to restrict this access. The Visual Basic language allows you to declare a variable that can be accessed only within the module that contains it. No code outside the module would be able to "see" such a member. A member inside a module and that is hidden from other modules is referred to as private. To declare a private variable, instead of Dim or Friend, you use the Private keyword. Here is an example: Module Exercise Friend FullName As String Private DateHired As Date Sub Main() FullName = "Gertrude Monay" DateHired = #4/8/2008# Dim Information As String Information = "Full Name: " & FullName & vbCrLf & "Date Hired: " & DateHired MsgBox(Information) End Sub End Module This would produce: When working on a project, you may want to create objects or declare variables that you want to be accessible from other applications. Such a member is referred to as public. To declare a variable that can be accessed by the same modules of the same project and modules of other projects, declare it using the Public keyword. Here is an example: Module Exercise Friend FullName As String Private DateHired As Date Public HourlySalary As Double Sub Main() FullName = "Gertrude Monay" DateHired = #4/8/2008# HourlySalary = 36.75 Dim Information As String Information = "Full Name: " & FullName & vbCrLf & "Date Hired: " & DateHired & vbCrLf & "Hourly Salary: " & HourlySalary MsgBox(Information) End Sub End Module This would produce: The Friend, Private, and Public keywords are called access modifiers because they control the level of access that a member of a module has. In previous sections, we saw how to control the members of a module. The level of access of a module itself can also be controlled. To control the level of access of a module, you can precede the Module keyword with the desired access modifier. The access modifier of a module can only be either Friend or Public. Here are examples: Because a program can use different variables, you can declare each variable on its own line. Here are examples: Module Exercise Sub Main() Dim NumberOfPages As Integer Dim TownName As String Dim MagazinePrice As Double End Sub End Module It is important to know that different variables can be declared with the same data type as in the following example: Module Exercise Sub Main() Dim NumberOfPages As Integer Dim Category As Integer Dim MagazinePrice As Double End Sub End Module When two variables use the same data type, instead of declaring each on its own line, you can declare two or more of these variables on the same line. There are two techniques you can use: You can use the same techniques when declaring many global variables. Here are examples: Module Exercise Friend FirstName As String, LastName As String Private DateHired As Date, HourlySalary As Double Sub Main() End Sub End Module After declaring the variables, you can initialize and use them as you see fit. We have indicated that when a variable is declared, it receives a default initialization unless you decide to specify its value. Whether such a variable has been initialized or not, at any time, you can change its value by reassigning it a new one. 
Here is an example:

Module Exercise
    Sub Main()
        ' Initializing a variable when declaring it
        Dim Number As Double = 155.82
        MsgBox("Number: " & Number)

        ' Changing the value of a variable after using it
        Number = 46008.39
        MsgBox("Number: " & Number)
    End Sub
End Module

This would produce two message boxes, the first showing 155.82 and the second showing 46008.39.

In the same way, we saw that you could declare a variable at module scope, outside of Main, and then initialize or change its value when necessary. Here is an example:

Module Exercise
    Private UnitPrice As Double
    Private DateOfBirth As Date = #12/5/1974#
    Private Number As Double

    Sub Main()
        ' Initializing a variable
        Number = 155.82
        MsgBox("Number: " & Number)

        ' Changing the value of a variable after using it
        Number = 46008.39
        MsgBox("Number: " & Number)
    End Sub
End Module

When declaring a variable, as the programmer, you should have an idea of how you want to use the variable and what type of values it should hold. In some cases, you may want the variable to hold a constant value that cannot be changed. We saw earlier that such a variable could be declared as a constant. An alternative is to declare it with the ReadOnly keyword. While a constant variable can be declared locally, a ReadOnly variable cannot: it must be declared globally. As done for a constant, when declaring a ReadOnly variable, you should initialize it. If you do not, the compiler would assign the default value based on its type. For example, a number-based variable would be initialized with 0 and a String variable would be initialized with an empty string. As done so far, to initialize the variable, use the assignment operator followed by the desired value. Like a constant, after declaring and optionally initializing a ReadOnly variable, you cannot change its value. Based on this, the following code would produce an error:

Module Exercise
    Dim UnitPrice As Double
    Dim DateOfBirth As Date = #12/5/1974#
    ReadOnly Number As Double = 155.82

    Sub Main()
        ' Initializing a variable
        Number = 155.82
        MsgBox("Number: " & Number)

        ' Changing the value of a variable after using it
        Number = 46008.39 ' Error: You cannot assign a value to
                          ' a ReadOnly variable after initializing it
        MsgBox("Number: " & Number)
    End Sub
End Module

In Microsoft Visual Basic 2010, the parser would signal the errors by underlining the read-only variable when you try changing its value. The Error List window would also point out the problems. This means that a ReadOnly variable must be assigned a value only once, when initializing it.

As mentioned already, you will write your code in normal text editors, whether Notepad, the Code Editor of Microsoft Visual Studio, or another editor. Also, you may already be familiar with how to look for a character, a symbol, a word, or a group of words in a document. Just as a reminder, on the main menu of the application, you can click Edit -> Find... This would display a dialog box where you can type the item and click Find. If you are using Microsoft Visual Studio and you want to find the different occurrences of a known character, symbol, word, or group of words, first select that item.
Then all occurrences of the selection would be highlighted in the document. In the same way, if you have a variable that is used more than once in your code and you want to see all the places where that variable is used, simply click the name (and wait two seconds) and all of its occurrences would be highlighted.

To get a list of all sections where the variable is used, if you are using Microsoft Visual Studio, right-click the name of the variable and click Find All References. This would produce a list of all sections where the variable is used and would display the list in the Find Symbol Results window. To access a particular section where the variable is used, double-click its entry in the list in the Find Symbol Results window.

Normally, from your knowledge of using computers, you probably already know how to select, cut, and copy text. These operations can be valuable for saving code in Microsoft Visual Studio. This means that, if you have code you want to use in different sections, you can preserve it somewhere and access it whenever necessary. To save code to use over and over again, first type the code in any text editor, whether Notepad, Microsoft Word, or the Code Editor of Microsoft Visual Studio. You can use code from any document where text can be copied, including a web page. Select that code and copy it to the clipboard. To preserve it, in Microsoft Visual Studio, display the Toolbox (on the main menu, you can click View -> Toolbox), then right-click an empty area on the Toolbox and click Paste. An alternative is to select the code, whether in the Code Editor or in a text editor, then drag it and drop it on the Toolbox. In the same way, you can add different code items to the Toolbox. After pasting or adding the code to the Toolbox, it becomes available. To use that code, drag it from the Toolbox and drop it in the section of the Code Editor where you want to use it.

As we will see throughout our lessons, there are many names you will use in your programs. After creating a name, in some cases you will have to change it. You can find where the name is and edit it. If the name is used in many places, you can continue looking for it and modify it, but there is a chance you will make a mistake. If you are writing your code using a text editor, you can use the Edit -> Replace option of the main menu to find and replace every instance of that name. You can use the same approach in the Code Editor. Unfortunately, this technique works for only one file. If your project has many files and the name is used in those files, it would be cumbersome to remember to change the name in all of them. Microsoft Visual Studio makes it easy to find and change a name wherever it is used. Consider the following code:

Module Exercise
    Public Sub Main()
        Dim nbr As Integer

        nbr = 148

        System.Console.WriteLine(nbr)
    End Sub
End Module

To change the name of a variable, in the Code Editor, double-click the name of the variable and edit (change) it. The name would then show a small underline. If you position your mouse on it, a tag would appear and you can click the arrow to reveal a short menu. If you click the Rename option, all instances of the variable would be changed (a small illustration follows at the end of this section).

If you create a long document that has many lines of code, in a certain section you may encounter a variable and want to find out where it was declared. If you are using Microsoft Visual Studio, to access the place where a variable was declared, right-click the name of the variable and click Go To Definition, or click the name and press F12. In both cases, the caret would jump to where the variable was declared.
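As a quick illustration of the Rename operation just described (the new name used here, StudentCount, is only a hypothetical choice and is not part of the lesson), renaming nbr updates every occurrence at once:

Module Exercise
    Public Sub Main()
        ' Every occurrence of the old name nbr has been replaced
        Dim StudentCount As Integer

        StudentCount = 148

        System.Console.WriteLine(StudentCount)
    End Sub
End Module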
If you are using the Code Editor of Microsoft Visual Studio and, in a long document that has many lines of code, you want to jump to a certain line, on the main menu click Edit -> Go To... (or press Ctrl+G). This would display a dialog box. Enter the line number and click OK or press Enter.
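To tie the pieces of this lesson together, here is a minimal sketch, not taken from the lesson itself, that combines a friendly, a private, a public, and a read-only member with variables declared on shared lines (the WeeklyHours member and its value are made up for the illustration):

Module Exercise
    ' Accessible from any module of this application
    Friend FirstName As String, LastName As String
    ' Accessible only inside this module
    Private DateHired As Date
    ' Accessible even from other applications
    Public HourlySalary As Double
    ' Must be initialized here and cannot be reassigned later
    ReadOnly WeeklyHours As Double = 40.0

    Sub Main()
        FirstName = "Gertrude"
        LastName = "Monay"
        DateHired = #4/8/2008#
        HourlySalary = 36.75

        Dim Information As String
        Information = "Full Name: " & FirstName & " " & LastName & vbCrLf & _
                      "Date Hired: " & DateHired & vbCrLf & _
                      "Hourly Salary: " & HourlySalary & vbCrLf & _
                      "Weekly Hours: " & WeeklyHours
        MsgBox(Information)
    End Sub
End Module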
Exercise 4.1: To "capitalize" a string means to change the first letter of each word in the string to upper case (if it is not already upper case). For example, a capitalized version of "Now is the time to act!" is "Now Is The Time To Act!". Write a subroutine named printCapitalized that will print a capitalized version of a string to standard output. The string to be printed should be a parameter to the subroutine. Test your subroutine with a main() routine that gets a line of input from the user and applies the subroutine to it. Note that a letter is the first letter of a word if it is not immediately preceded in the string by another letter. Recall that there is a standard boolean-valued function Character.isLetter(char) that can be used to test whether its parameter is a letter. There is another standard char-valued function, Character.toUpperCase(char), that returns a capitalized version of the single character passed to it as a parameter. That is, if the parameter is a letter, it returns the upper-case version. If the parameter is not a letter, it just returns a copy of the parameter.

Exercise 4.2: The hexadecimal digits are the ordinary, base-10 digits '0' through '9' plus the letters 'A' through 'F'. In the hexadecimal system, these digits represent the values 0 through 15, respectively. Write a function named hexValue that uses a switch statement to find the hexadecimal value of a given character. The character is a parameter to the function, and its hexadecimal value is the return value of the function. You should count lower case letters 'a' through 'f' as having the same value as the corresponding upper case letters. If the parameter is not one of the legal hexadecimal digits, return -1 as the value of the function. A hexadecimal integer is a sequence of hexadecimal digits, such as 34A7, FF8, 174204, or FADE. If str is a string containing a hexadecimal integer, then the corresponding base-10 integer can be computed as follows:

value = 0;
for ( i = 0; i < str.length(); i++ )
   value = value*16 + hexValue( str.charAt(i) );

Of course, this is not valid if str contains any characters that are not hexadecimal digits. Write a program that reads a string from the user. If all the characters in the string are hexadecimal digits, print out the corresponding base-10 value. If not, print out an error message.

Exercise 4.3: Write a function that simulates rolling a pair of dice until the total on the dice comes up to be a given number. The number that you are rolling for is a parameter to the function. The number of times you have to roll the dice is the return value of the function. You can assume that the parameter is one of the possible totals: 2, 3, ..., 12. Use your function in a program that computes and prints the number of rolls it takes to get snake eyes. (Snake eyes means that the total showing on the dice is 2.)

Exercise 4.4: This exercise builds on Exercise 4.3. Every time you roll the dice repeatedly, trying to get a given total, the number of rolls it takes can be different. The question naturally arises, what's the average number of rolls? Write a function that performs the experiment of rolling to get a given total 10000 times. The desired total is a parameter to the subroutine. The average number of rolls is the return value. Each individual experiment should be done by calling the function you wrote for Exercise 4.3. Now, write a main program that will call your function once for each of the possible totals (2, 3, ..., 12).
It should make a table of the results, something like the following (a short note on the averages you should expect follows the exercises):

Total On Dice     Average Number of Rolls
-------------     -----------------------
      2                  35.8382
      3                  18.0607
      .                     .
      .                     .

Exercise 4.5: The sample program RandomMosaicWalk.java from Section 4.6 shows a "disturbance" that wanders around a grid of colored squares. When the disturbance visits a square, the color of that square is changed. The applet at the bottom of Section 4.7 shows a variation on this idea. In this applet, all the squares start out with the default color, black. Every time the disturbance visits a square, a small amount is added to the red component of the color of that square. Write a subroutine that will add 25 to the red component of one of the squares in the mosaic. The row and column numbers of the square should be passed as parameters to the subroutine. Recall that you can discover the current red component of the square in row r and column c with the function call Mosaic.getRed(r,c). Use your subroutine as a substitute for the changeToRandomColor() subroutine in the program RandomMosaicWalk2.java. (This is the improved version of the program from Section 4.7 that uses named constants for the number of rows, number of columns, and square size.) Set the number of rows and the number of columns to 80. Set the square size to 5.

Exercise 4.6: For this exercise, you will write a program that has the same behavior as the following applet. Your program will be based on the non-standard Mosaic class, which was described in Section 4.6. (Unfortunately, the applet doesn't look too good on many versions of Java.) The applet shows a rectangle that grows from the center of the applet to the edges, getting brighter as it grows. The rectangle is made up of the little squares of the mosaic. You should first write a subroutine that draws a rectangle on a Mosaic window. More specifically, write a subroutine named rectangle such that calling it will call Mosaic.setColor(row,col,r,g,b) for each little square that lies on the outline of a rectangle. The topmost row of the rectangle is specified by top. The number of rows in the rectangle is specified by height (so the bottommost row is top+height-1). The leftmost column of the rectangle is specified by left. The number of columns in the rectangle is specified by width (so the rightmost column is left+width-1). The animation loops through the same sequence of steps over and over. In one step, a rectangle is drawn in gray (that is, with all three color components having the same value). There is a pause of 200 milliseconds so the user can see the rectangle. Then the very same rectangle is drawn in black, effectively erasing the gray rectangle. Finally, the variables giving the top row, left column, size, and color level of the rectangle are adjusted to get ready for the next step. In the applet, the color level starts at 50 and increases by 10 after each step. You might want to make a subroutine that does one loop through all the steps of the animation. The main() routine simply opens a Mosaic window and then does the animation loop over and over until the user closes the window. There is a 1000 millisecond delay between one animation loop and the next. Use a Mosaic window that has 41 rows and 41 columns. (I advise you not to use named constants for the numbers of rows and columns, since the problem is complicated enough already.)
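A quick sanity check for Exercise 4.4 (this note is an addition, not part of the original exercise): each roll of the pair of dice hits the desired total with some fixed probability p, so the number of rolls needed follows a geometric distribution whose mean is 1/p. For example,

E[\text{rolls}] = \frac{1}{p}, \qquad
p(\text{total}=2) = \tfrac{1}{36} \;\Rightarrow\; E = 36, \qquad
p(\text{total}=3) = \tfrac{2}{36} \;\Rightarrow\; E = 18,

which is consistent with the sample averages 35.8382 and 18.0607 shown in the table above.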
The Rigid Body. A body in which the distance between any two points does not change due to the application of external forces is called a rigid body (figure 1).

Motion of a Rigid Body. The various types of motion of a rigid body can be grouped mainly into three categories, viz. translation, fixed-axis rotation, and general plane motion. A motion is said to be a translation if any straight line inside the body keeps the same direction during the motion. It may also be observed that in a translation all the particles forming the body move along parallel paths. If these paths are straight lines, the motion is a rectilinear translation (figure 2); if the paths are curved lines, the motion is a curvilinear translation (figure 3).

Position of a point in the rigid body. The position of any point B of the rigid body in translation is described with respect to another point A of the rigid body (figure 4). The position of B with respect to A is denoted by the position vector rB/A. Using vector addition,

rB = rA + rB/A ...(1)

Velocity of a point in the rigid body. The relationship between the instantaneous velocities of A and B is obtained by differentiating eq. (1) with respect to time:

vB = vA ...(2)

(since rB/A is constant in translation, drB/A/dt = 0). Therefore, in translation, all the points of a rigid body have the same velocity.

Acceleration. Taking the time derivative of equation (2) yields the relationship between the accelerations of A and B:

aB = aA ...(3)

Therefore, in translation, all the points of a rigid body have the same acceleration.

3. Rotation About a Fixed Axis. In this type of motion, the particles forming the rigid body move in parallel planes along circles centered on the same fixed axis (figure 5). Rotation should not be confused with curvilinear translation. For example, the plate shown in figure 6(a) is in curvilinear translation, with all its particles moving along parallel circles, while the plate shown in figure 6(b) is in rotation, with all its particles moving along concentric circles centered on the point of suspension.

Angular Position. At the instant shown, the angular position of the radial line r is defined by the angle θ measured between a fixed reference line and r. Here r extends normally from the axis of rotation at point O to a point P in the body (figure 7).

Angular Displacement. The change in angular position, dθ, is called the angular displacement. dθ is a vector which is measured in degrees, radians or revolutions (1 rev = 2π rad). The direction of dθ is given by the right-hand rule (figure 7).

Angular Velocity. The time derivative of θ is called the angular velocity:

ω = dθ/dt ...(4)

ω is an axial vector whose direction is always along the axis of rotation, i.e. in the same direction as dθ (figure 7). It is measured in rad/s.

Angular Acceleration. The angular acceleration α measures the time rate of change of the angular velocity:

α = dω/dt ...(5)
or α = d²θ/dt² ...(6) (since ω = dθ/dt)

Eliminating dt from equation (4) and equation (5) yields

ω dω = α dθ ...(7)

Constant Angular Acceleration. If the angular acceleration of the body is constant, i.e. α = αc, and ω and α are collinear, integration of equations (4), (5) and (7) gives

ω = ω0 + αc t ...(8)
θ = θ0 + ω0 t + (1/2) αc t² ...(9)
ω² = ω0² + 2 αc (θ – θ0) ...(10)

Here θ0 and ω0 are the initial values of the body's angular position and angular velocity respectively.

4. Motion of Any Point P of the Rigid Body in Fixed-Axis Rotation. As the rigid body rotates, the point P travels along a circular path of radius r centred at point O (figure 8), which lies on the axis of rotation.

Velocity of Point P.
In scalar form, the velocity of point P of the rigid body is given as

v = ωr ...(11)

The direction of v is tangent to the circular path (figure 8). In vector notation, the velocity of point P is given by

v = ω × r ...(12)

Acceleration of Point P. The acceleration of point P is expressed in normal and tangential components (figure 9):

at = αr ...(13)
an = ω²r ...(14)

In vector notation, the total acceleration a of point P is expressed as

a = at + an ...(15)
or a = α × r – ω²r ...(16)

Sample Problem 1 (rotation about a fixed axis). A cord is wrapped around a wheel which is initially at rest (figure A). If a force is applied to the cord and gives it an acceleration a = (4t) m/s², where t is in seconds, determine as a function of time (a) the angular velocity of the wheel, and (b) the angular position of line OP in radians.

Solution, part (a). The wheel is subjected to rotation about a fixed axis passing through point O. Thus, point P on the wheel moves along a circular path, and the acceleration of this point has both tangential and normal components. In particular, the tangential component is (aP)t = (4t) m/s², since the cord is connected to the wheel and tangent to it at P. Hence the angular acceleration of the wheel is α = (aP)t / r, with the radius r taken from figure A. Using this result, the wheel's angular velocity ω can now be determined from α = dω/dt, since this equation relates α, t and ω. Integrating, with the initial condition that ω = 0 at t = 0, yields ω as a function of time.

Part (b). Using this result, the angular position θ of the radial line OP can be computed from ω = dθ/dt, since this equation relates θ, ω and t. Integrating, with the initial condition θ = 0 at t = 0, gives θ as a function of time.

Sample Problem 2 (rotation about a fixed axis). Disk A (figure A) starts from rest and, through the use of a motor, begins to rotate with a constant angular acceleration of αA = 2 rad/s². If no slipping occurs between the disks, determine the angular velocity and angular acceleration of disk B just after A turns 10 revolutions.

Solution: First convert the 10 revolutions to radians. Since there are 2π rad in one revolution, θA = 10 rev × 2π rad/rev = 62.8 rad. Since αA is constant, the angular velocity of A is then found from equation (10): ωA² = 0 + 2(2 rad/s²)(62.8 rad), so ωA = 15.9 rad/s. As shown in figure B, the speed of the contacting point P on the rim of A is

vP = ωA rA = (15.9 rad/s)(0.6 m) = 9.54 m/s

The velocity is always tangent to the path of motion; and since no slipping occurs between the disks, the speed of point P' on B is the same as the speed of P on A. The angular velocity of B is therefore ωB = vP / rB, with rB taken from the figure. The tangential components of acceleration of both disks are also equal, since the disks are in contact with one another. Hence, from figure C,

(aP)t = (aP')t, i.e. αA rA = αB rB

It is important to notice that the normal components of acceleration, (aP)n and (aP')n, act in opposite directions, since the paths of motion of the two points are different. Furthermore, (aP)n ≠ (aP')n, since the magnitudes of these components depend on both the radius and the angular velocity of each disk, i.e. (aP)n = ωA²rA and (aP')n = ωB²rB. Consequently, aP ≠ aP'.

A flywheel 0.4 m in diameter is brought uniformly from rest up to a speed of 240 rpm in 2 sec. What is the velocity of a point on the rim 1 s after starting from rest? (a) 0.2 m/s (b) 0.4 m/s (c) 0.6 m/s (d) 0.8 m/s

A ball rolls 2 m across a flat car in a direction perpendicular to the path of the car. In the same time interval during which the ball is rolling, the car moves at a constant speed on the horizontal straight track for a distance of 2.5 m. What is the absolute displacement of the ball?
(a) 3.2 m (b) 1.6 m (c) 0.8 m (d) 0.4 m

A rigid body is rotating at 180 rev/min about a line in the direction i – 2j – 2k. The origin is on the line. What is the magnitude of the linear velocity of the point (1 m, 1 m, 1 m)?

Load B is connected to a double pulley by one of two inextensible cables (figure A). The motion of the pulley is controlled by cable C, which has a constant acceleration of 0.225 m/s² and an initial velocity of 0.3 m/s, both directed to the right. What is the number of revolutions executed by the pulley in 2 s?

A rigid body is rotating at 5 rad/s about an axis through the origin with direction cosines 0.4, 0.6 and 0.8 with respect to the x, y and z-axes respectively. What is the magnitude of the velocity of a point in the body defined by the position vector r = –2i + 3j – 4k with respect to the origin? (d) None of these

For the system of connected bodies (figure B), the initial angular velocity of the compound pulley B is 6 rad/s counterclockwise and weight D is decelerating at the constant rate of 4 cm/s². What distance will weight A travel before coming to rest? (a) 6.5 cm (b) 9.5 cm (c) 11.5 cm (d) 13.5 cm

When the angular velocity of a 4 cm diameter pulley is 3 rad/s, the total acceleration of a point on its rim is 30 cm/s². Determine the angular acceleration of the pulley at this instant (a worked sketch for this question follows the list). (a) 12 rad/s² (b) 10 rad/s² (c) 8 rad/s² (d) 6 rad/s²

Determine the horizontal component of the acceleration of point B on the rim of the flywheel (figure C). At the given position, ω = 4 rad/s and α = 12 rad/s², both clockwise.

A pulley has a constant angular acceleration of 12 rad/s². When the angular velocity is 3 rad/s, the total acceleration of a point on the rim of the pulley is 10 m/s². Compute the diameter of the pulley. (a) 1/3 m (b) 2/3 m (c) 1 m (d) 4/3 m

The step pulleys are connected by a crossed belt. If the angular acceleration of C is 2 rad/s², what time is required for A to travel 64 ft from rest? How far does D move while A moves 100 ft? (a) 2 sec (b) 3 sec (c) 4 sec (d) 6 sec
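As a worked sketch for the 4 cm diameter pulley question above (the arithmetic below is added for illustration and uses equations (13)–(15); it is not part of the original problem set): with r = 2 cm, ω = 3 rad/s and total rim acceleration a = 30 cm/s², the normal and tangential components combine at right angles, so

a_n = \omega^2 r = (3)^2(2) = 18\ \text{cm/s}^2, \qquad
a_t = \sqrt{a^2 - a_n^2} = \sqrt{30^2 - 18^2} = 24\ \text{cm/s}^2, \qquad
\alpha = \frac{a_t}{r} = \frac{24}{2} = 12\ \text{rad/s}^2,

which corresponds to choice (a).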
Global Warming Science - www.appinsys.com/GlobalWarming [last update: 2010/02/14]

The Earth's climate system is very complex and many attempts have been made to model it. There is an interaction of solar radiation and magnetic fields, land, ocean, atmosphere, clouds, gases released by anthropogenic processes (deforestation, agriculture, land use change, burning of carbon-based fuels) and natural processes (volcanoes, etc.). In this system, the sun provides the heating of the earth through solar radiation in various wavelengths. Some of the solar radiation is reflected by clouds, thus reducing the heating from solar radiation (analogy: cloudy days in summer are typically cooler than sunny days because the clouds block heat from the sun). Heat is re-radiated by the Earth's surface. Some of this heat is absorbed by "greenhouse gases" and re-emitted in the atmosphere, thus contributing to warming the Earth (analogy: cloudy days in winter are typically warmer than sunny days because the clouds keep heat in).

The greenhouse effect operates by inhibiting the cooling of the climate by reducing net outgoing radiation. The shorter wave radiation passes relatively unhindered by the CO2 to warm the Earth. The Earth re-radiates the energy in longer wave radiation (infrared, far-infrared) which is absorbed and reradiated by the CO2, causing atmospheric warming. The following figure provides a simplified conceptual overview of the process. From: UNEP/GRID-Arendal. Greenhouse effect. UNEP/GRID-Arendal Maps and Graphics Library. 2002. http://maps.grida.no/go/graphic/greenhouse_effect.

The following figure shows the absorption of radiation by wavelength for H2O, CO2 as well as oxygen and ozone (O2+O3). See http://brneurosci.org/co2.html for a good explanation of the potential global warming effects of CO2.

The temperature varies with altitude. The following figure provides a general indication of the variation of temperature with altitude and indicates the parts of the atmosphere referred to as the troposphere and the stratosphere. The stratosphere is warmer due to increased ozone levels absorbing ultraviolet radiation. The greenhouse gas (GHG) theory indicates that increasing GHGs should result in warming of the troposphere and cooling of the stratosphere.

Temperature Variation By Altitude

The most important greenhouse gases in Earth's atmosphere include water vapor (H2O), carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), ozone (O3), and the chlorofluorocarbons (CFCs). In addition to reflecting sunlight, clouds are also a major greenhouse substance. Water vapor and cloud droplets are in fact the dominant atmospheric absorbers. Water vapor is the most important greenhouse gas due to its abundance in the atmosphere.

The relationship between CO2 and increased temperature has been demonstrated in laboratory experiments and shown to be a logarithmic relationship – i.e. one must keep doubling the concentration to achieve the same increment of warming. The effect of doubling the CO2 has been estimated to be approximately 0.7 C.
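To make the logarithmic relationship concrete (this is a schematic formula added for illustration, using the roughly 0.7 C-per-doubling figure quoted above; it is not taken from the page itself), the direct, no-feedback warming from raising the concentration from C0 to C can be written as

\Delta T \;\approx\; S \,\log_2\!\left(\frac{C}{C_0}\right), \qquad S \approx 0.7\ ^{\circ}\mathrm{C},

so going from C0 to 2C0 adds about 0.7 C, and a further doubling to 4C0 adds the same increment again, which is why each successive doubling yields the same amount of warming.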
However that does not take into account the presence of other greenhouse gases (GHG). Water vapor is the most prevalent GHG and the effect of increasing CO2 depends on the relative quantity of non-CO2 GHG. Thus in humid atmospheric conditions, CO2 contributes very little warming, whereas it could contribute more in dry atmospheric regions. Richard Lindzen (MIT Atmospheric Science Professor) states: “there is a much more fundamental and unambiguous check of the role of feedbacks in enhancing greenhouse warming that also shows that all models are greatly exaggerating climate sensitivity. Here, it must be noted that the greenhouse effect operates by inhibiting the cooling of the climate by reducing net outgoing radiation. However, the contribution of increasing CO2 alone does not, in fact, lead to much warming (approximately 1 deg. C for each doubling of CO2). The larger predictions from climate models are due to the fact that, within these models, the more important greenhouse substances, water vapor and clouds, act to greatly amplify whatever CO2 does. This is referred to as a positive feedback. It means that increases in surface temperature are accompanied by reductions in the net outgoing radiation – thus enhancing the greenhouse warming. ... Satellite observations of the earth’s radiation budget allow us to determine whether such a reduction does, in fact, accompany increases in surface temperature in nature. As it turns out, the satellite data from the ERBE instrument (Barkstrom, 1984, Wong et al, 2006) shows that the feedback in nature is strongly negative -- strongly reducing the direct effect of CO2 (Lindzen and Choi, 2009) in profound contrast to the model behavior.” [http://www.quadrant.org.au/blogs/doomed-planet/2009/07/resisting-climate-hysteria] The following figure shows the estimated radiative forcing components as defined by the IPCC in the latest scientific basis report (May 2007). The report states: “Energy consumption by human activities, such as heating buildings, powering electrical appliances and fuel combustion by vehicles, can directly release heat into the environment. Anthropogenic heat release is not an RF, in that it does not directly perturb the radiation budget; the mechanisms are not well identified and so it is here referred to as a non-initial radiative effect.” Note also in the following figure the large uncertainty bar for aerosols and cloud effects, which are poorly understood and thus not well modeled. A good explanation of climate sensitivity is provided by Nir Shaviv, describing how the cosmic ray flux and effect on cloud cover is insufficiently modeled. [http://www.sciencebits.com/OnClimateSensitivity] Estimated Radiative Forcing Components (Figure FAQ 2.1 – 2 in the IPCC AR4) Data regarding volcanic aerosols is very sparse. A recent NASA study found that the levels of cooling volcanic aerosols has been declining in recent decades, as shown in the following figure. (Global 'Sunscreen' Has Likely Thinned, Report NASA Scientists 3/15/07) [http://www.nasa.gov/centers/goddard/news/topstory/2007/aerosol_dimming.html] The National Research Council (National Academy of Sciences) in their study “Climate Change Science: An Analysis of Some Key Questions, said “The monitoring of aerosol properties has not been adequate to yield accurate knowledge of the aerosol climate influence”. (Notice in the above IPCC radiative forcing components figure, volcanic aerosols do not appear in the Natural Processes section of the figure.) 
Atmospheric Volcanic Aerosols 1981 – 2006 Showing General Declining Trend Greenhouse Gas Sources The sources of greenhouse gases (GHG) come from various sectors including transportation, industrial processes, power generation for residential consumption, agriculture and deforestation. According to the United Nations Food and Agriculture Organization (FAO), deforestation accounts for 25 to 30 percent of the release of GHG [http://www.fao.org/newsroom/en/news/2006/1000385/index.html]. The report states: “Most people assume that global warming is caused by burning oil and gas. But in fact between 25 and 30 percent of the greenhouse gases released into the atmosphere each year – 1.6 billion tonnes – is caused by deforestation.” From 1990 to 2000, the net forest loss was 8.9 million hectares per year. From 2000 to 2005, the net forest loss was 7.3 million hectares per year. The ten countries with the largest net loss of forest per year (2000 – 2005) are: Brazil, Indonesia, Sudan, Myanmar, Zambia Tanzania, Nigeria, Democratic Republic of the Congo, Zimbabwe, and Venezuela (combined loss of 8.2 million hectares per year). The ten countries with the largest net gain of forest per year (2000 – 2005) are: China, Spain, Viet Nam, United States, Italy, Chile, Cuba, Bulgaria, France and Portugal (combined gain of 5.1 million hectares per year). [http://www.fao.org/forestry/site/28821/en/] The following figure (left) shows a generalized source of GHG from various sources. However, this does not include deforestation (the number one cause of GHG). Various studies show various differing contributions by sector, since not all consider the same factors. The right-hand figure shows emissions by sector from another source using 1996 IPCC data [http://www.idosi.org/aejaes/jaes3(5)/1.pdf]. These are global estimates and do not reflect the fact that GHG contributions by sector vary regionally (for example, in Washington State where a large portion of power generation is hydroelectric, and where there is no net deforestation). Estimated Greenhouse Gas Emissions by Sector from Two Sources The above figure ignores one of the largest sources of GHG – deforestation and shows a smaller impact other anthropogenic land use change effects than most studies. The following figure shows the effect of land-use change on atmospheric CO2 [http://cdiac.ornl.gov/trends/landuse/houghton/houghton.html] Annual Effect of Land-Use Change on Atmospheric CO2 The following figure shows GHG by type (pie chart b) and sector (pie chart c) from the IPCC AR4 SPM [http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr_spm.pdf]. Note that CO2 fossil fuel use is only 56.6 % of GHG. GHG Emissions by Type and Sector from IPCC AR4 SPM The following figure shows the net flux of carbon to the atmosphere due to land use change. The United States has the largest land use change carbon sink in the world – i.e. while much of the world is burning its forests, the US is absorbing the carbon from the atmosphere. This figure shows: “Cumulative Emissions of C02 From Land-Use Change measures the total mass of carbon absorbed or emitted into the atmosphere between 1950 and 2000 as a result of man-made land use changes (e.g.- deforestation, shifting cultivation, vegetation re-growth on abandoned croplands and pastures). Positive values indicate a positive net flux ("source") of CO2; for these countries, carbon dioxide has been released into the atmosphere as a result of land-use change. 
Negative values indicate a negative net flux ("sink") of CO2; in these countries, carbon has been absorbed as a result of the re-growth of previously removed vegetation.” [http://earthtrends.wri.org/pdf_library/maps/co2_landuse.pdf]. The same report also states: “While the majority of global CO2 emissions are from the burning of fossil fuels, roughly a quarter of the carbon entering the atmosphere is from land-use change.” Becoming vegetarian would be more efficient in reducing greenhouse gases than driving a hybrid car. The United Nations Food and Agriculture Organization (FAO) released a report in November 2006 [http://www.fao.org/newsroom/en/news/2006/1000448/index.html ] that states: “the livestock sector generates more greenhouse gas emissions as measured in CO2 equivalent – 18 percent – than transport…. the livestock sector accounts for 9 percent of CO2 deriving from human-related activities, but produces a much larger share of even more harmful greenhouse gases. It generates 65 percent of human-related nitrous oxide, which has 296 times the Global Warming Potential (GWP) of CO2…it accounts for 37 percent of all human-induced methane (23 times as warming as CO2) ” [http://www.un.org/apps/news/story.asp?NewsID=20772&Cr=global&Cr1=environm ] A study published in 2008 reports that China (which was excluded from the Kyoto requirements) became the largest emitter of CO2 from fossil fuel combustion and cement production in 2006. (Gregg, J. S., R. J. Andres, and G. Marland, “China: Emissions pattern of the world leader in CO2 emissions from fossil fuel consumption and cement production”, Geophysical Research Letters 35, 2008) [http://www.agu.org/pubs/crossref/2008/2007GL032887.shtml]. The following figures are from that study. The left-hand figure compares the US annual carbon emissions with China’s since 1950. The right-hand figure compares the monthly carbon for 2001 – 2007. The study states: “the annual emission rate in the US has remained relatively stable between 2001–2006 while the emission rate in China has more than doubled.” The atmospheric CO2 has been shown to lag the temperature in the past warming cycles, as shown in the following figure (From http://calspace.ucsd.edu/virtualmuseum/climatechange2/07_2.shtml). Vostok Ice Core Temperature and CO2 Trends for Past 450,000 Years The IPCC AR4 Scientific Basis report, Part 6 (May 2007), makes the following statements: Many scientific studies have shown that CO2 increase follows temperature increase in the pre-historical records. A few examples: Many scientists disagree that past CO2 has been constantly as low as the IPCC states. An examination of the history of CO2 measurement is provided at http://www.co2web.info/ESEF3VO2.pdf Reconstructions of past CO2 (prior to continuous measurements) have been made from various sources. The IPCC uses reconstruction from ice cores. Other reconstructions show different trends. The following figure shows CO2 reconstruction from pine needle stomatal density. [http://icecap.us/images/uploads/200705-03AusIMMcorrected.pdf] The IPCC rejected all available historical measurements of CO2, except Antarctic ice cores, because the measurements did not match their preferred theory: “more than 90,000 direct measurements of CO2 in the atmosphere, carried out in America, Asia, and Europe between 1812 and 1961, with excellent chemical methods (accuracy better than 3%), were arbitrarily rejected”. Even the ice core measurements were adjusted to match their CO2 story line, as shown in the following figure. 
[http://www.warwickhughes.com/icecore/zjmar07.pdf] A CO2 reconstruction study based on oak tree leaf stomata in the Netherlands (van Hoof et al “A Role for Atmospheric CO2 in Preindustrial Climate Forcing”, Proceedings of the US National Academy of Sciences, 2007) shows the following figure comparing the study findings (red line) with the IPCC findings (blue line) in terms of the CO2 climate forcing. [http://www.pnas.org/content/105/41/15815.full.pdf+html] The study states: “Comparable to other stomata-based records, reconstructed preindustrial CO2 levels fluctuate between 319.2 and 292.3 ppmv with an average value of 311.4 ppmv … It should be noted that, in general, CO2 data derived from stomatal frequency analysis have higher average values (300 ppmv) compared with the IPCC baseline ” See also: Beck: http://www.biokurs.de/treibhaus/180CO2/08_Beck-2.pdf The NOAA Earth System Research Laboratory – Global Monitoring Division [http://www.esrl.noaa.gov/gmd/aggi/] provides data from a network of CO2 monitoring stations around the world (with data for Mauna Loa starting in 1970). The following figures show the location of the monitoring locations (left) and the global average CO2 concentration from these sites (right). NOAA/ESRL CO2 Monitoring Locations (Left) and Global Average CO2 Concentration (Right) The following figure shows the IPCC graph of atmospheric CO2 as measured at Mauna Loa, Hawaii (left), while the right-hand graph compares the CO2 at Mauna Loa and the South Pole. They show a similar trend in slope. In fact the CO2 plots from any of the CO2 stations in the NOAA database show a similar CO2 trend. It can be seen from the figure below that the CO2 is greater in the summer than the winter (the CO2 is not causing seasons, but it is a response to the seasonal change in temperature). Comparing the various CO2 trends available from the NOAA database shows a consistent trend in atmospheric CO2 rise around the world (as illustrated by comparing the figures shown above and below). But the temperature trends vary greatly by region. Left: Atmospheric CO2 at Mauna Loa (Figure 2.3 in the IPCC AR4) Right: Atmospheric CO2 at Mauna Loa (Red) and at South Pole (Blue) from the NOAA Database The temperature trend at Mauna Loa shows no correspondence with the CO2 trend. The following figure shows the Mauna Loa CO2 along with the temperature trend from the nearest station in the NASA GISS database (Hilo, Hawaii) clearly illustrating the lack of correspondence between the two. Atmospheric CO2 at Mauna Loa (Figure 2.3 in the IPCC AR4) with Temperature Trend from the NASA GISS Database for Hilo. CO2 – Ocean Water Relationship The following figure shows the monthly variation in CO2 at Mauna Loa (left) and the solubility of CO2 in water as a function of temperature [from http://wattsupwiththat.com/2007/11/04/guest-weblog-co2-variation-by-jim-goodridge-former-california-state-climatologist/]. Seasonal changes in CO2 are a result of seasonal CO2 sources and sinks in the global carbon cycle. The ocean temperature plays a large role in this. The following figure compares the atmospheric CO2 and ocean surface CO2 at a station in Hawaii. [http://hahana.soest.hawaii.edu/hot/trends/trends.html] It shows the inverse annual correlation between atmospheric and sea surface CO2 – within each year the cycle is opposite. CO2 – Temperature Observations The following figure compares satellite-based lower troposphere temperature (blue) with CO2 growth rate (black) for 1979 - 2008. 
The temperature changes precede the CO2 growth rate changes. The second figure shows a regression of CO2 growth as a function of temperature calculated from points in the first figure [from http://icecap.us/images/uploads/FlaticecoreCO2.pdf]. CO2 – IPCC Modeling Problems The conclusion that the current regional warming trend is significant and caused mainly by anthropogenic CO2, is a result of theoretical climate models (General Circulation Models - GCMs) in which the human-defined models are only able to reproduce current global temperature trends since 1970 by increasing the CO2 levels. The availability of the CRU emails since November 2009 has shed further light on some of the modeling issues. For example: Tom Wigley (senior scientist at NCAR) to Michael Mann (creator of the hockey stick graph) [Oct 14, 2009]: “The Figure you sent is very deceptive. As an example, historical runs with PCM look as though they match observations -- but the match is a fluke. PCM has no indirect aerosol forcing and a low climate sensitivity -- compensating errors. In my (perhaps too harsh) view, there have been a number of dishonest presentations of model results by individual authors and by IPCC.” [http://www.eastangliaemails.com/emails.php?eid=1057&filename=1255553034.txt] The IPCC AR4 Scientific Basis report (Part 6) states: “Climate models are used to simulate episodes of past climate... Models allow the linkage of cause and effect in past climate change to be investigated. Models also help to fill the gap between the local and global scale in palaeoclimate, as palaeoclimatic information is often sparse, patchy and seasonal. For example, long ice core records show a strong correlation between local temperature in Antarctica and the globally mixed gases CO2 and methane, but the causal connections between these variables are best explored with the help of models.” So, models in which the causal connections are programmed in, are used to explore the causal connections. One major problem is that Antarctica does not match the models and is now ignored by the IPCC. The following figure is from the IPCC AR4 report (2007). It does not show modeling of Antarctica, because Antarctica does not fit the models. From IPCC AR 4 Figure 9.6 The following figure (left) shows modeled temperature change from the IPCC TAR report (2001). The models show warming in Antarctica with cooling around the Antarctic Peninsula and in the adjacent Weddell Sea – exactly the opposite of the observed trend. The following figure (right) shows the observed temperature trend in the “cooling” area. Left: From IPCC TAR Figure 9.2 – Modeled temperature differences from 1975 to 1995 to the first decade in the 21st century. Right: From NASA / GISS database. Unlike the northern hemisphere, temperature measurements in Antarctica only started in the 1950’s, and there are very few stations covering a 40-year period to the present. The following figure shows two of the available non-peninsula temperature stations’ measurements are shown in the following figures (plots from stations in the NASA / GISS database). [http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=700896640008&data_set=1&num_neighbors=1 ] Typical Antarctica Temperature Station Trends For a more detailed regional study of Antarctica, see: http://www.appinsys.com/GlobalWarming/RS_Antarctica.htm The NOAA Earth System Research Laboratory – Global Monitoring Division maintains a network of CO2 monitoring stations around the world. [http://www.esrl.noaa.gov/gmd/aggi/]. 
The following figures compare the recent CO2 trends at Palmer Station (on the Antarctic Peninsula) and the South Pole. There is virtually no difference between the two locations, although there is a substantial temperature difference as seen in the previous temperature trend graphs. The next figure compares CO2 and temperature trend at the South Pole showing the lack of correlation between the two. CO2 Trends at Palmer Station and at the South Pole – Same CO2 Trends, Very Different Temperature Trends Combining CO2 and South Pole Temperature Trends – No Correlation The greenhouse hypothesis suggests the warming would be greatest in the atmosphere (troposphere) and that the warming would be significant both day and night. It would also be greatest in the polar regions because gases like CO2 are most effective at trapping the heat in very cold temperatures. The reason that the warming should be greatest at the polar regions is due to the following: CO2 in the atmosphere absorbs and re-emits infra-red radiation in distinctive wavebands, particularly around 12 - 18 microns. Radiation at other wavelengths simply passes through the atmosphere without being intercepted by CO2. The wavelength of infrared radiation from the earth's surface depends on the temperature of the surface. All bodies emit infrared over a wide band of wavelengths, but peak at a `dominant wavelength' determined by the temperature of the emitting surface. For example, an object with a temperature of 32°C will radiate most intensely at 9.5 microns. At 15°C (the mean surface temperature of the earth), the dominant wavelength will be 10 microns. At -25°C, it becomes 11.7 microns, and at -50°C becomes 13 microns. The problem is that the observations do not match the CO2 hypothesis. The IPCC 2007 Report Chapter 9 – Understanding and Attributing Climate Change [http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch09.pdf] provides a climate model based simulation of the expected CO2 “spatial signature” of all forcings including anthropogenic CO2 (left-hand figure below shows degrees change per decade). However, a study of actual data from radiosonde data shows a non-CO2 based signature [http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-chap5.pdf]. The models do not match reality. In reference to this, Richard Lindzen (MIT Atmospheric Sciences Professor) stated: “surface warming should be accompanied by warming in the tropics around an altitude of about 9km that is about 2.5 times greater than at the surface. Measurements show that warming at these levels is only about 3/4 of what is seen at the surface, implying that only about a third of the surface warming is associated with the greenhouse effect, and, quite possibly, not all of even this really small warming is due to man (Lindzen, 2007, Douglass et al, 2007). This further implies that all models predicting significant warming are greatly overestimating warming. This should not be surprising (though inevitably in climate science, when data conflicts with models, a small coterie of scientists can be counted upon to modify the data. Thus, Santer, et al (2008), argue that stretching uncertainties in observations and models might marginally eliminate the inconsistency. 
That the data should always need correcting to agree with models is totally implausible and indicative of a certain corruption within the climate science community).” [http://www.quadrant.org.au/blogs/doomed-planet/2009/07/resisting-climate-hysteria] Trends in degrees per decade – left: IPCC CO2-based trend; right: actual data A study comparing the models to observations from satellites and balloons (1979-2004) also shows a problem with the models. The following figure is from the study. “A comparison of tropical temperature trends with model predictions”, by Douglass, D.H., J.R. Christy, B.D. Pearson, and S.F. Singer, 2007 - International Journal of Climatology. [http://www.scribd.com/doc/904914/A-comparison-of-tropical-temperature-trends-with-model-predictions]. The models exhibit the CO2 theory of most warming occurring in the troposphere. However, the satellite and balloon based observations show warming only at the surface of the earth. The report stated: “Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs. … On the whole, the evidence indicates that model trends in the troposphere are very likely inconsistent with observations that indicate that, since 1979, there is no significant long-term amplification factor relative to the surface. If these results continue to be supported, then future projections of temperature change, as depicted in the present suite of climate models, are likely too high.” A 2009 paper states: “There appears to be something fundamentally wrong with the way temperature and carbon are linked in climate models” [http://www.rice.edu/nationalmedia/news2009-07-14-globalwarming.shtml] In a 2008 paper published by Engel et al in Nature Geoscience [http://www.sciencedaily.com/releases/2008/12/081215111305.htm] found that “Most atmospheric models predict that the rate of transport of air from the troposphere to the above lying stratosphere should be increasing due to climate change. … an international group of researchers has now found that this does not seem to be happening. On the contrary, it seems that the air masses are moving more slowly than predicted. … Due to the results presented now, the predictions of atmospheric models must be re-evaluated.” In an assessment of the IPCC modeling, a paper by: Bellamy, D. and Barrett, J. (2007). “Climate stability: an inconvenient proof”, (Proceedings of the Institution of Civil Engineers – Civil Engineering, 160, 66-72) states: “The climate system is a highly complex system and, to date, no computer models are sufficiently accurate for their predictions of future climate to be relied upon.” In another review of IPCC modeling (Carter, R.M. (2007). “The myth of dangerous human-caused climate change” The Aus/MM New Leaders Conference, Brisbane May 3, 2007) Carter examined evidence on the predictive validity of the general circulation models (GCMs) used by the IPCC scientists. He found that “while the models included some basic principles of physics, scientists had to make “educated guesses” about the values of many parameters because knowledge about the physical processes of the earth’s climate is incomplete. In practice, the GCMs failed to predict recent global average temperatures as accurately as simple curve-fitting approaches. 
They also forecast greater warming at higher altitudes in the tropics when the opposite has been the case.” A 2007 study by Douglass and Christy published in the Royal Meteorological Society’s International Journal of Climatology [http://www.physorg.com/news116592109.html] found that the climate models do not match the data for the tropical troposphere. ““When we look at actual climate data, however, we do not see accelerated warming in the tropical troposphere. Instead, the lower and middle atmosphere are warming the same or less than the surface. For those layers of the atmosphere, the warming trend we see in the tropics is typically less than half of what the models forecast.””. A previous study cited in the same article blamed the data instead of the models! A 2008 study “On the Credibility of Climate Predictions” (D. Koutsoyiannis, A. Efstradiadis, N. Mamassis & A. Christofides, Department of Water Resources, Faculty of Civil Engineering, National Technical University of Athens, Greece) states: “Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.” [http://www.atypon-link.com/IAHS/doi/pdf/10.1623/hysj.53.4.671] Increasing atmospheric CO2 does not by itself result in significant warming. The climate models assume a significant positive feedback of increased water vapor in order to amplify the CO2 effect and achieve the future warming reported by the IPCC. According to the models, as the Earth warms more water evaporates from the ocean, and the amount of water vapor in the atmosphere increases. Since water vapor is the main greenhouse gas, this leads to a further increase in the atmospheric temperature. The models assume that changes in temperature and water vapor will result in a constant relative humidity (i.e. as temperatures increase, the specific humidity increases, keeping the relative humidity constant. This is one of the most controversial aspects of the models. Some studies say that the positive feedback is correct, others say not. Models that include water vapor feedback with constant relative humidity predict the Earth's surface will warm more than twice as much over the next 100 years as models that contain no water vapor feedback. According to the IPCC [http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter3.pdf] “Water vapour is also the most important gaseous source of infrared opacity in the atmosphere, accounting for about 60% of the natural greenhouse effect for clear skies, and provides the largest positive feedback in model projections of climate change.“ A 2004 NASA study using satellite humidity data found that “The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity” resulting in overestimation of temperature increase. [http://www.nasa.gov/centers/goddard/news/topstory/2004/0315humidity.html] MIT’s Richard Lindzen (the Alfred P. 
Sloan Professor of Meteorology at MIT) argues that the IPCC models have not only overestimated warming due to positive water vapor feedback, they also have the sign wrong: “Our own research suggests the presence of a major negative feedback involving clouds and water vapor, where models have completely failed to simulate observations (to the point of getting the sign wrong for crucial dependences). If we are right, then models are greatly exaggerating sensitivity to increasing CO2.” [http://meteo.lcd.lu/globalwarming/Lindzen/Lindzen_testimony.html] He also stated: “the way current models handle factors such as clouds and water vapor is disturbingly arbitrary. In many instances the underlying physics is simply not known. In other instances there are identifiable errors. … current models depend heavily on undemonstrated positive feedback factors to predict high levels of warming.” [http://www.cato.org/pubs/regulation/regv15n2/reg15n2g.html] Roy Spencer (Team Leader, Advanced Microwave Scanning Radiometer – Earth Observing System (AMSR-E), NASA) has a presentation providing evidence that there is net negative feedback due to water vapor: www.ghcc.msfc.nasa.gov/AMSR/meetings2008/monday14july/spencer_precipitation_microphysics.ppt A study of model feedbacks “Validating and Understanding Feedbacks in Climate Models” (D-Z. Sun, T. Zhang, and Y. Yu, NOAA-CIRES/Climate Diagnostics Center) states: “The models tend to overestimate the positive feedback from water vapor in El Nino warming. … [and] tend to underestimate the negative feedback from cloud albedo in El Nino warming.” Another paper by the same authors concludes: “The extended calculation using coupled runs confirms the earlier inference from the AMIP runs that underestimating the negative feedback from cloud albedo and overestimating the positive feedback from the greenhouse effect of water vapor over the tropical Pacific during ENSO is a prevalent problem of climate models“ [http://climatesci.org/2008/05/13/tropical-water-vapor-and-cloud-feedbacks-in-climate-models-a-further-assessment-using-coupled-simulations-by-de-zheng-sun-yongqiang-yu-and-tao-zhang] See www.appinsys.com/GlobalWarming/WaterVapor.htm for more details on the problem of water vapor not cooperating with the CO2 based theory. The National Research Council (National Academy of Sciences) produced a study called “Climate Change Science: An Analysis of Some Key Questions” [http://books.nap.edu//html/climatechange/]. Here are a couple of statements from that report: The sun provides the energy that warms the earth. And yet according to the NOAA National Climatic Data Center [http://www.ncdc.noaa.gov/oa/climate/globalwarming.html ] “Our understanding of the indirect effects of changes in solar output and feedbacks in the climate system is minimal”. The importance of fluctuations and trends in solar inputs in affecting the climate is inadequately modeled. Although the sun exhibits varies types of energy related events (sunspots, solar flares, coronal mass ejections), sunspots have been observed and counted for the longest amount of time. A 2007 paper by Syun-Ichi Akasofu at the International Arctic Research Center (University of Alaska Fairbanks) provides an analysis of warming trends in the Arctic. [http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/index.php ] They analyzed the capability of climate models (GCMs) to reproduce the past temperature trends of the Arctic (shown in the following figure): “we asked the IPCC arctic group (consisting of 14 sub-groups headed by V. 
Kattsov) to “hindcast” geographic distribution of the temperature change during the last half of the last century. To “hindcast” means to ask whether a model can produce results that match the known observations of the past; if a model can do this, we can be much more confident that the model is reliable for predicting future conditions … Ideally, the pattern of change modeled by the GCMs should be identical or very similar to the pattern seen in the measured data. We assumed that the present GCMs would reproduce the observed pattern with at least reasonable fidelity. However, we found that there was no resemblance at all.” Model vs Observed temperature Changes [from Akasofu, above] The authors’ conclusions: “only a fraction of the present warming trend may be attributed to the greenhouse effect resulting from human activities. This conclusion is contrary to the IPCC (2007) Report, which states that “most” of the present warming is due to the greenhouse effect. One possible cause of the linear increase may be that the Earth is still recovering from the Little Ice Age. It is urgent that natural changes be correctly identified and removed accurately from the presently on-going changes in order to find the contribution of the greenhouse effect… The fact that an almost linear change has been progressing, without a distinct change of slope, from as early as 1800 or even earlier (about 1660, even before the Industrial Revolution), suggests that the linear change is natural change” A recent paper studying the effect of “brown clouds” (caused by biomass burning) on warming in Asia (Ramanathan, V., M.V. Ramana, G. Roberts, D. Kim, C. Corrigan, C. Chung, and D. Winker, 2007. “Warming trends in Asia amplified by brown cloud solar absorption”. Nature, 448, 575-578) concludes “atmospheric brown clouds contribute as much as the recent increase in anthropogenic greenhouse gases to regional lower atmospheric warming trends”. The University of Alabama at Huntsville provides monthly plots of worldwide temperature anomalies for the troposphere since 2000 [http://climate.uah.edu/]. The following figure is from UAH and shows the temperature trend (degrees per decade) for 1978 to 2006. According to the CO2 theory, warming should be occurring over both poles – but this is not happening. Recent studies are showing that black carbon (soot) plays a larger role than CO2 in causing Arctic warming. A 2008 Cornell University report “Global Warming Predictions are Overestimated, Suggests Study on Black Carbon” [http://www.news.cornell.edu/stories/Nov08/SoilBlackCarbon.kr.html]. The report states: “As a result of global warming, soils are expected to release more carbon dioxide, the major greenhouse gas, into the atmosphere, which, in turn, creates more warming. Climate models try to incorporate these increases of carbon dioxide from soils as the planet warms, but results vary greatly when realistic estimates of black carbon in soils are included in the predictions, the study found. … black carbon can take 1,000-2,000 years, on average, to convert to carbon dioxide. … the researchers found that carbon dioxide emissions from soils were reduced by about 20 percent over 100 years, as compared with simulations that did not take black carbon's long shelf life into account. The findings are significant because soils are by far the world's largest source of carbon dioxide, producing 10 times more carbon dioxide each year than all the carbon dioxide emissions from human activities combined. 
Small changes in how carbon emissions from soils are estimated, therefore, can have a large impact.”

The following figure shows the temperature response around the world due to black carbon, from research at the University of California, Irvine [http://www.sciencedaily.com/releases/2007/06/070606113327.htm]. The global pattern matches the global temperature changes shown above more closely than do the modeled results of CO2 influence.

The atmospheric CO2 generally has a low correlation with temperature. The following figure shows the global temperatures and CO2 from 1998 to 2008, comparing the satellite-measured lower-troposphere temperature and the Hadley Climatic Research Unit data (used by the IPCC) [http://intellicast.com/Community/Content.aspx?a=127]. While CO2 has steadily increased over the last decade, temperatures have not.

A 2008 study of the satellite-era temperature data (Christy & Douglass: “Limits on CO2 Climate Forcing from Recent Temperature Data of Earth”) [http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf] found: “The recent atmospheric global temperature anomalies of the Earth have been shown to consist of independent effects in different latitude bands. The tropical latitude band variations are strongly correlated with ENSO effects. …The effects in the northern extratropics are not consistent with CO2 forcing alone … These conclusions are contrary to the IPCC statement: “[M]ost of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”” They found that the underlying trend that may be due to CO2 was 0.07 degrees per decade.

The following two figures are from the above Christy & Douglass study. The first (left) shows the satellite-based temperature anomalies for the tropics (red), globe (black), northern extratropics (blue) and southern extratropics (green). The second figure (right) shows the correlation between the tropical temperatures and ENSO3.4 (El Nino SSTs for area 3.4).

There are many scientific studies on the atmospheric residence time of CO2, with many disagreements (i.e. the science is not settled). Many studies show a residence time of 5 to 15 years (although the IPCC claims that it’s 100-200 years). An example: “Atmospheric CO2 residence time and the carbon cycle: Global warming” [http://cat.inist.fr/?aModele=afficheN&cpsidt=4048904]: “An atmospheric CO2 residence time is determined from a carbon cycle which assumes that anthropogenic emissions only marginally disturb the preindustrial equilibrium dynamics of source/atmosphere/sink fluxes. This study explores the plausibility of this concept, which results in much shorter atmospheric residence times, 4-5 years, than the magnitude larger outcomes of the usual global carbon cycle models which are adjusted to fit the assumption that anthropogenic emissions are primarily the cause of the observed rise in atmospheric CO2. The continuum concept is consistent with the record of the seasonal photosynthesis swing of atmospheric CO2 which supports a residence time of about 5 years, as also does the bomb C14 decay history.” This link: http://folk.uio.no/tomvs/esef/ESEF3VO2.htm provides a list of published studies showing CO2 residence times as listed below. See that reference for details.
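To see why the disputed residence-time number matters, here is a toy single-reservoir sketch added for illustration only (it is not taken from any of the studies cited): a pulse of CO2 that decays with an assumed e-folding residence time tau leaves very different fractions in the atmosphere after a few decades.

```python
import math

def fraction_remaining(years, tau):
    """Toy single-reservoir model: a CO2 pulse decays as exp(-t/tau),
    where tau is the assumed atmospheric residence time in years."""
    return math.exp(-years / tau)

for tau in (5, 15, 100):                 # residence times debated in the studies above
    f = fraction_remaining(50, tau)      # fraction of the pulse left after 50 years
    print(f"tau = {tau:3d} yr -> {f:.1%} of the pulse remains after 50 years")
```

With tau = 5 years essentially none of a pulse remains after 50 years, while with tau = 100 years roughly 60% does, which is why the residence-time disagreement matters so much for attribution arguments.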
The following figure compares maximum atmospheric CO2 residence time from various studies [http://c3headlines.typepad.com/.a/6a010536b58035970c0120a5e507c9970c-pi].

NOAA had a “Weather School” “Learning Lesson” web page with a CO2 experiment (it has since been removed but can still be viewed at: [http://web.archive.org/web/20060129154229/http://www.srh.noaa.gov/srh/jetstream/atmos/ll_gas.htm]). The web page is shown below (originally at: [http://www.srh.noaa.gov/srh/jetstream/atmos/ll_gas.htm] – removed in Nov. 2009). [Red highlighting added.]

CO2 – Positive Effects

The positive effects of increased atmospheric CO2 are ignored in the alarmist scare stories. The UN periodically produces an assessment of worldwide ozone depletion. The most recent report, WMO/UNEP: “Scientific Assessment of Ozone Depletion: 2006” by the Scientific Assessment Panel of the Montreal Protocol on Substances that Deplete the Ozone Layer [http://www.wmo.ch/pages/prog/arep/gaw/reports/ozone_2006/pdf/exec_sum_18aug.pdf], states: “Model simulations suggest that changes in climate, specifically the cooling of the stratosphere associated with increases in the abundance of carbon dioxide, may hasten the return of global [(60°S-60°N)] column ozone to pre-1980 values by up to 15 years”. Perhaps CO2 isn’t all bad.

Studies of crop growth rates under various concentrations of CO2 also show a positive effect of the current increase in atmospheric CO2. The following figures show an example. The USDA provides agricultural productivity data [http://www.ers.usda.gov/Data/AgProductivity/table03.xls] listing state-by-state yearly data. The data has been graphed by David Archibald [http://icecap.us/images/uploads/STATESPRODUCTIVITY.JPG] and is shown below for 1960 to 2005 for several states. I have added the thick red line showing atmospheric CO2 at Mauna Loa over the same time period (CO2 graph from http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html).

Agricultural Productivity for Six States, Plus Atmospheric CO2 (Red – Scale at right)

The following figure is from the United Nations UNEP [http://maps.grida.no/go/graphic/losses-in-land-productivity-due-to-land-degradation] showing a substantial increase in global productivity from 1981 – 2003 (interestingly, the UNEP’s caption was “losses in land productivity due to land degradation” – typical of the UN’s cup-half-empty viewpoint). The following figure is from a study “Long Term Monitoring of Vegetation Greenness from Satellites”. A study of tree growth in Maryland indicates that “forests in the Eastern United States are growing faster than they have in the past 225 years”; the faster growth is attributed to increased CO2 [http://sercblog.si.edu/?p=466].
http://www.appinsys.com/GlobalWarming/GW_Part5_GreenhouseGas.htm
2. Now, present the equation x^2 - 2x - 6 = 0. Ask the students how to solve an equation like this for a positive root, since it does not factor.

Introduce New Material:
1. Introduce the concept of “completing the square.” The goal of completing the square is to manipulate an equation into one that factors nicely, like the earlier examples. Begin with an easier example (x^2 + 10x - 39 = 0), one that can be factored nicely, so that students can solve the problem both ways and see that they get the same answer.
2. First, give students a brief history of the “completing the square” method, formulated by al-Khwārizmī when algebra was invented (information found on pages 1-2 of the “Islamic Mathematics” information packet). Then, follow the steps on pages 3-5 of the “Islamic Mathematics” information packet. These teach students to complete the square using the method of al-Khwārizmī. It might be easiest for students to use the second method (pp 4-5), but both should be presented.
3. Show the solution to the example problem (x^2 + 10x - 39 = 0) by factoring so students can recognize that the factoring and completing-the-square methods both give the same solution. (See the worked example further below.)

Guided Practice:
1. Pass out the Completing the Square worksheet, and help students with the first problem (x^2 - 2x - 6 = 0) by completing it on the board.

Independent Practice:
1. Have students work in groups of 2-3 people to complete the rest of the worksheet. (Make sure the students are keeping their plus and minus signs consistent while completing the square, depending on the sign in the original problem!) Walk around the room to help students who have trouble completing any of the problems.

Closing / Assessment:
1. For homework, assign problems similar to these (from a textbook or worksheet), asking students to solve the problems either by factoring or by completing the square. Or, have students complete side 1 of the worksheet in class and side 2 for homework.

Math 20 STUDY GUIDE

To the students: When you study Algebra, the material is presented to you in a logical sequence. Many ideas are developed, left, and then returned to when your knowledge is greater. Many different kinds of problems have similar instructions. This presents great difficulty when trying to prepare for a final exam or keep up in the next Math class. You mastered all the skills, but which one do you use in a specific problem? This guide was written to help you re-organize your knowledge into a more usable form.

When you are faced with a problem that begins, “Solve for x,” what should you do? As you will see, there are at least 9 different situations where you have had instruction. This guide will give you the key questions to ask yourself in order to decide what procedure to use. The main steps that are involved are included. The questions are asked in the ORDER that you should ask them. Each is referenced with a section (or, if only part of the section is involved, the specific page or problem number).

To use this guide effectively, you should first read through the guide. Each reference to a section should be examined carefully. Can you make up a problem like the one being described? Would you know how to solve that problem without any clues? Look at the problem or section referenced. Is it like yours? Can you work those problems? If so, go on to the next topic. If not, highlight that line with a marker for further study. Perhaps you should put an example problem on a 3 by 5 card (include the page number) for practice later. Now read the section again carefully. Work the examples and a few similar problems from the exercises (odd ones so you can check the answers) for practice. When you finish a whole type (i.e. Solve for x), mix your 3 by 5 cards and work them like a test.
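As a quick reference for the lesson plan above, here is al-Khwārizmī’s classic case (the x^2 + 10x - 39 = 0 problem cited in the lesson) worked both ways. This worked sketch is added for illustration and is not taken from the “Islamic Mathematics” packet.

Completing the square:
x^2 + 10x - 39 = 0
x^2 + 10x = 39
x^2 + 10x + 25 = 39 + 25        (add (10/2)^2 = 25 to both sides)
(x + 5)^2 = 64
x + 5 = ±8,  so  x = 3  or  x = -13

Factoring:
x^2 + 10x - 39 = (x + 13)(x - 3) = 0,  so  x = 3  or  x = -13, the same solutions either way.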
Simply verify that you know how to start the problem. Any that you cannot start will direct you back to the sections where you need further study. If you need further help, consider asking for a tutoring appointment in the Math Lab. When you know what SPECIFIC topics present a problem for you, you can make the tutoring session much more effective and help your tutor know what you need. See the Math Lab Coordinator early in the semester to fill out an appointment form.

SOLVE FOR X

Is there more than one letter?
- Treat all letters EXCEPT the one you are solving for as if they were numbers.

Is there an x^3 or higher power of x?
- The only way we could work this would be to gather all terms on one side of the equation and then factor.
- Use the Zero Product Principle to set each factor equal to zero, and solve. There might be as many solutions as the highest power of x.

Is x^2 the highest power of x? Use any of the following:
1) Try factoring; it sometimes works.
2) Complete the square. WARNING: If a perfect square equals a negative number, quit. There is no real solution.
3) Put the equation in standard form and apply the Quadratic Formula.

Rational inequality, boundary points: Is there a variable in the numerator or denominator of a fraction? Find boundary points and test a point in each interval.

To solve 2 linear equations in 2 unknowns there are 2 (equally good) methods. Each eliminates one variable in the first step.
1) Substitution Method
2) Addition Method
(You can observe the approximate solution by graphing both equations on the same graph. The solution is the coordinates of the point where the lines cross.)
What can ‘go wrong?’
a) You lose BOTH variables in the first step and end up with nonsense like 0 = 7. There is NO SOLUTION. (In this case the lines on the graph would be parallel, so they don’t meet at all.) We call this an inconsistent system.
b) You lose both variables in the first step and end up with a truth like 0 = 0. The answer is that there are MANY SOLUTIONS. (In this case if you graphed the lines, one would be superimposed over the other.) Both equations describe the same line, so any point on the line represents a solution.

To solve 2 equations in 2 unknowns with squares or higher powers of one or both variables: Use addition or substitution – whichever allows you to eliminate a variable in the first step. Then use substitution to find the other part of the solution.

To solve 3 or more equations in 3 or more unknowns: Form a matrix and use elementary row operations. Note carefully how to find inconsistent or dependent systems.

COMPUTE OR EVALUATE

Order of Operations
1) Work from the innermost grouping out.
a) The numerator and denominator of a fraction are each groupings.
b) An absolute value symbol is a grouping.
c) If the fraction is complex, find SOME part that can be simplified and start there.

A Logarithmic Computation
1) Write the expression in terms of the logs of single numbers.
2) Write each number in scientific notation, using base 10 for ordinary numbers. Use natural logs if the problem involves powers of e.
3) Look up the log of each number using the table of logs in the book (mantissa) and write the log of each power of 10 by inspection.
4) Simplify the expression into one with a positive mantissa and an integer characteristic.
5) Write the answer in scientific notation using the body of the table.

A Logarithmic Expression
Use the definition of logarithm to write it in exponential form, and then find the missing number.

To graph ANY equation involving x and y:
1) Make a table for x and y.
2) Pick at least 5 values, some negative, for x. (Occasionally, it may be convenient to pick some values for y.)
3) Using the formula given to you, complete the table.
(Substitute each into the formula, then compute the remaining value.) It is particularly useful to substitute 0 for x to find the y-intercept(s) and 0 for y to find the x-intercept(s).
4) Plot the points from your table on the graph.
5) Connect the points smoothly, moving from left to right.

To graph an equation like x = 4 (or any number): All x values are 4; pick anything at all for y. The result will be a vertical line.
To graph an equation like y = 7 (or any number): All y values are 7; pick anything at all for x. The result will be a horizontal line.

To graph an inequality:
1) Graph the corresponding equation, using a dotted edge if equality is not allowed.
2) Pick any point well away from the dotted edge. (If the origin qualifies, it is an easy choice.)
3) Substitute the coordinates of your point into the inequality.
a) If the test point makes the inequality true, shade in that side of the edge.
b) If the test point does NOT make the inequality true, shade in the other side.
4) If the inequality allows =, (either ≤ or ≥) fill in the edge of the graph.

To graph a system of inequalities:
1) Graph the first inequality as above.
2) Using a different color, graph the second inequality on the same axes.
3) The answer is the region that is shaded with BOTH colors.

The x-intercept of a line or curve is where it crosses the x-axis. To find its value, substitute 0 for y and then solve for x. The y-intercept is where the line or curve crosses the y-axis. To find its value, substitute 0 for x and then solve for y.

If an equation can be put into the form y = mx + b, then it is a straight line. If an equation involves the second power of x or y or both, it may be a conic section.

To find the distance between two points, (x1, y1) and (x2, y2), the Pythagorean Theorem gives us this formula: d = sqrt((x2 - x1)^2 + (y2 - y1)^2). The midpoint of the line segment between two points, (x1, y1) and (x2, y2), is ((x1 + x2)/2, (y1 + y2)/2).

The slope of a line can be determined in two ways:
1) If you know the equation of the line, solve it for y. The slope is the coefficient of x.
2) If you know the coordinates of two points, (x1, y1) and (x2, y2), use the formula m = (y2 - y1)/(x2 - x1).
Parallel lines have the same slope. Perpendicular lines have slopes with product –1.

Fractions: If there are no variables, see the referenced section.
To add or subtract:
a) Find the Lowest Common Denominator.
b) Change each fraction to an equivalent fraction by multiplying numerator and denominator by the same value.
c) Add the numerators and use the common denominator.
If there is a “–” in front of a fraction, be sure to distribute it to EVERY TERM in the numerator.
To multiply, factor numerators and denominators, reducing where possible. Leave the answer in factored form unless it is part of a larger problem (i.e. must be added to other terms).
To divide, FIRST invert the divisor, and then proceed as in multiplication.
If there is a fraction within a numerator or denominator (a complex fraction):
a) Multiply numerator and denominator of the largest fraction by the LCD for all fractions.
b) Treat numerator and denominator as a grouping and simplify, then divide as indicated by the larger fraction.
Remember, no denominator of any fraction may ever be zero. Always reduce final answers where possible by dividing common factors from numerator and denominator.

Radicals: No negative under an even-index radical.
Is the expression under the radical a perfect square? cube? Simplify.
Is there a factor of the expression under the radical that is a perfect square, cube, etc.? Factor it out and simplify. Remember, the radical always has a nonnegative value, so if the expression is a variable, when it is negative the value of the RADICAL is its opposite.
Is there a fraction under the radical? Simplify the expression into a single fraction and separate into two separate radicals.
Is there a product or quotient of radicals?
Perform the operations. Are two radicals in a sum alike (same index and same radicand)? Add using the coefficients of the radicals.
Is there a radical in a denominator?
a) If it is a single radical, multiply numerator and denominator by that radical.
b) If there is a sum of two terms where one or both are radicals, multiply the numerator and denominator by the CONJUGATE of the denominator.

To simplify an exponential expression:
1) FIRST, review the rules of exponents.
2) You may apply any appropriate rule to the expression, but the following strategies may be useful:
a) Are there powers of other expressions? Use the Power (of a Product) rule to remove parentheses.
b) Are there powers of exponential expressions? Use the Power (of a Power) rule where appropriate.
c) Are there like bases in the numerator or denominator? Use the product rule to simplify (add exponents).
d) Are there like bases in both numerator and denominator? Divide (by subtracting exponents).
e) Are there negative exponents? Use the negative exponent rule to write the reciprocal.
f) Write as a single fraction.
g) Are you finished? Each exponent should apply to a single base. Each base should appear only once. There should be no negative exponents. Powers of numbers should be calculated. The fraction should be in lowest terms.

Logarithms: First, review the properties of Logarithms. Apply the properties – one at a time – until the goal is achieved.

Complex numbers:
a) We define that i^2 = −1.
b) Complex numbers are written as a + bi, where a and b are real numbers.
c) To remove a complex number from the denominator of an expression, multiply by its conjugate.
d) For powers of i, substitute (−1) for i squared as many times as needed; substitute 1 for i to the fourth.

Scientific notation: Place the decimal point after the first non-zero digit and multiply by the appropriate power of 10. Simplify by using exponent rules on the powers of 10.

To factor a number means to write it as a product of primes (numbers that cannot be factored further). Begin with any product and then break each number down until nothing can be factored further.

To factor a polynomial:
1) Is there a factor common to all terms? Factor out the greatest common factor.
2) Are there 4 terms? Try factoring by grouping.
3) Is there a common pattern?
a) Is this a difference of 2 squares?
b) Is this a perfect square trinomial?
c) Is this the sum or difference of two cubes?
When all else fails on a trinomial:
4) Perform a structured search. (This is an organized version of the method from the text.)
a) List all the possible ways to factor the first (squared) term. These form the column headings.
b) In each column, list all the possible arrangements of the factors for the last (constant) term. (These form the rows.)
c) Test each entry in your table using FOIL to see if this makes the middle term possible. (If there are no candidates, report that it DOES NOT FACTOR.)
d) If you have a candidate, insert signs to try to match the original.
i. If the last sign (constant) is negative, the signs are different.
ii. If the last sign is positive, the two signs are alike; use the sign of the middle term.
iii. If none of the above works, go on searching for new candidates.
iv. If you exhaust the list and none work, report that it DOES NOT FACTOR.
e) Check your solution. Check to be sure that none of the factors can be factored further.

WORD PROBLEMS

“How to Solve Word Problems in Algebra” by Mildred Johnson is an excellent and inexpensive resource. It is available in the bookstore.
1) Read through the problem to determine its type.
2) Draw a picture, if possible.
3) Write “Let x be …”
4) Pick out the basic unknown and finish the above sentence.
5) Write as many other quantities as possible in terms of x and label them.
6) Is there STILL another unknown?
If so, write, “Let y be …” and finish the sentence. Write all other quantities in terms of x and y. You may need one or more of the formulas below to complete this. Note: Tables are useful in many of these problems. Make one like those in the text where appropriate.
7) Write any formula(s) that apply to this type of problem.
a) d = rt (distance, time and speed)
b) In wind or stream problems, when moving with the current, the speed is the sum of the speed of the craft and the current.
c) i = Pr (interest for 1 year)
d) Concentration of a solution: (% target) (amount of mixture) = amount of target ingredient.
e) (cost per item) (number of items) = value
f) (denomination of a bill) (# of bills) = value
g) consecutive numbers: x, x + 1, x + 2, etc.
h) consecutive ODD or EVEN numbers (the value of the first determines which): n, n + 2, n + 4, etc.
i) In age problems, when they say “in 5 years,” write each age + 5.
j) Work rate problems: convert the time to do a job into the work done per time period by taking the reciprocal. THESE quantities can be added or subtracted.
k) Geometric formulas are found on the back cover of the text. Ask your instructor which you are responsible for knowing.
l) Fulcrum: Use weight x distance for each force.
m) Cost Analysis
n) Direct and Inverse Variation
8) Use the formula or the words from the problem to write an equation.
9) Solve the equation for x (or x and y).
10) REREAD the question. Write all the quantities from the original using the value for x as a key.
11) Answer the question asked.
12) Check the answer with the problem’s original words. Discard any answers that don’t fit.

Other topics: Functions and composites (Sec 2.1, 2.2), Inverse of a function, The Binomial Theorem, Intersection and union of intervals.

MATH 060 ONLINE SYLLABUS

COMMUNICATING WITH THE INSTRUCTOR: The best way to communicate with me is by email. Another great way to communicate with me live is by chat room. If you would like me to host a chat room, please contact me and let me know. Lastly, you can call and leave a message on my office phone.

This class will meet on campus for an orientation on Monday February 9, 2009 in 305 at 7:30 PM. You are required to attend the orientation if you haven’t completed the online orientation by Saturday, February 7 at 11:59 PM. The class will also meet 14, April 4, May 2, and May 23 for exams in CCC 401 from 8:30 AM – 12:00 PM, and the final exam will be given on May 30 in CCC 401 from 8:30 AM – 12:00 PM.

ONLINE HOMEWORK: The online homework assignments will have due dates. The suggested due dates will be displayed online in CourseCompass, but assignments will be considered late if not done by the on-campus exam. The homework is not timed, so you may redo the problems until you get 100%. No work is accepted after the due date has passed. The homework will be made available after the suggested due dates so that you can always go back and redo/review the problems before the exam.

ONLINE QUIZZES AND TESTS: Every week we don’t have an exam will have either an online quiz or an online test. The dates that these are available, as well as the due dates, are indicated online in CourseCompass. The quizzes and tests will be timed. You will have multiple attempts on these. Only the highest score will count toward your grade.

TEXTBOOK HOMEWORK: The textbook homework assignments will be posted online in CourseCompass on the announcements page. The homework must be neatly done following the sample homework guidelines to receive full credit. You will have four homework assignments. Each assignment is due on the day of the exam. NO LATE HOMEWORK WILL BE ACCEPTED.

ON CAMPUS EXAMS: The exam days are given above and online in the schedule.
If you miss an exam, I replace your missed exam score with your final exam grade. If you miss two tests, you will be dropped. If you know in advance that you will not be able to attend a scheduled exam, you may reschedule one test provided you contact me 1 week before the test is given.

Once you are enrolled in the online section with admissions and records, you will need to complete the online orientation or attend the on-campus orientation. Once you complete the online orientation you will receive the course ID. Then you will be able to register for the course online at CourseCompass. Before you register online, you should have everything you need, including a student access code (comes with the textbook or can be purchased online), the CourseCompass Course ID provided by your instructor, and a valid email address.
• You will get the student access code when you purchase the textbook on campus or online.
• The CourseCompass Course ID is provided once you successfully complete the orientation.
• You will need a valid email address (if you don’t have one, get one at hotmail.com).
Once you have enrolled in the course and are able to log in to CourseCompass, you MUST run the installation wizard and load the plug-ins.
*** If you do not get registered with CourseCompass by Friday, February 20, you will be dropped from the class.
*** If you are inactive in CourseCompass for more than a week, you will be dropped. Inactive means that you are not completing the assignments, not participating in discussion threads, and not participating in chat rooms.

IS ONLINE MATH FOR YOU?? There are obvious benefits to taking a Math class online, but there can be disadvantages. There is no “real” teacher. Your “real” teacher is replaced by online video clips. This means that you will need to be extremely self-motivated online. You will also need to pace yourself and make sure you stay up with the class. To do well in this class you should have about 24 hours a week set aside for this class. If you feel that you are someone who needs more structure from a traditional professor, you should take a regular Math class.

"I ordered the Algebra Buster late one night when my daughter was having problems in her honors algebra class. After we ordered your software she was able to see step by step how to solve the problems. Algebra Buster definitely saved the day."
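To connect the “SOLVE FOR X” checklist in the study guide above to something executable, here is a minimal Python sketch of option 3 (put the equation in standard form and apply the Quadratic Formula), including the guide’s warning about a negative quantity under the square root. The function name and structure are illustrative only and are not part of the course materials.

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 with the Quadratic Formula.
    Returns a tuple of real roots, or () if there is no real solution."""
    disc = b * b - 4 * a * c          # the quantity under the square root
    if disc < 0:                      # "perfect square equals a negative number" -> quit
        return ()
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# The worksheet problem from the lesson plan: x^2 - 2x - 6 = 0
print(solve_quadratic(1, -2, -6))     # roots 1 + sqrt(7) and 1 - sqrt(7)
```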
http://www.algebra-online.com/alternative-math/tutorials-3/lesson-plan-for-completing-the-square.html
A gyroscope is a device used primarily for navigation and measurement of angular velocity 1) 2) 3). Gyroscopes are available that can measure rotational velocity in 1, 2, or 3 directions. 3-axis gyroscopes are often implemented with a 3-axis accelerometer to provide a full 6 degree-of-freedom (DoF) motion tracking system. Gyroscopes have evolved from mechanical-inertial spinning devices consisting of rotors, axles, and gimbals to various incarnations of electronic and optical devices. Each exploits some physical property of the system allowing it to detect rotational velocity about some axis. There are three basic types of gyroscope: - Rotary (classical) gyroscopes - Vibrating Structure Gyroscope - Optical Gyroscopes The classic gyroscope exploits the law of conservation of angular momentum which, simply stated, says that the total angular momentum of a system is constant in both magnitude and direction if the resultant external torque acting upon the system is zero4). These gyroscopes typically consist of a spinning disk or mass on an axle, which is mounted on a series of gimbals. Each gimbal offers the spinning disk an additional degree of rotational freedom. The gimbals allow the rotor to spin without applying any net external torque on the gyroscope. Thus as long as the gyroscope is spinning, it will maintain a constant orientation. When external torques or rotations about a given axis are present in these devices, orientation can be maintained and measurement of angular velocity can be measured due to the phenomenon of precession. Precession occurs when an object spinning about some axis (the spin axis) has an external torque applied in a direction perpendicular to the spin axis (the input axis). In a rotational system when net external torques are present, the angular momentum vector (which is along the spin axis) will move in the direction of the applied torque vector. As a result of the torque, the spin axis rotates about an axis that is perpendicular to both the input axis and spin axis (called the output axis). This rotation about the output axis is then sensed and fed back to the input axis where a motor or similar device applies torque in the opposite direction, cancelling the precession of the gyroscope and maintaining its orientation. This cancellation can also be accomplished with two gyroscopes oriented at right angles to one another. To measure rotation rate, counteracting torque is pulsed at regular time intervals. Each pulse represents a fixed angular rotation δθ, and the pulse count in a fixed time interval t will be proportional to the net angle change θ over that time period – thus, the applied counteracting torque is proportional to the rotation rate to be measured3). Today rotary gyroscopes are mainly used in stabilization applications. The presence of moving parts (gimbals, rotors) means that these gyroscopes can wear out or jam. A number of bearing types have been developed to minimize the wear and chance for jamming in these gyroscopes 5) 6). Another consequence of moving parts is that it limits how small these gyroscopes can be. Thus rotary gyroscopes are mostly used today in harsh military and naval environments which are subject to shock and intense vibration, and where physical size is not critical. These units are therefore not readily commercially available. Vibrating structure gyroscopes are MEMS (Micro-machined Electro-Mechanical Systems) devices that are easily available commercially, affordable, and very small in size. 
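Before moving on to MEMS devices, here is a minimal sketch of the rate-integrating readout described above for rotary gyroscopes, in which each counteracting-torque pulse corresponds to a fixed angular increment. The function name and numbers are illustrative assumptions, not values from the article.

```python
def rate_from_pulses(pulse_count, delta_theta_deg, interval_s):
    """Each rebalance pulse cancels a fixed angular increment delta_theta,
    so the summed pulses give the net angle and the average rotation rate."""
    net_angle = pulse_count * delta_theta_deg      # degrees turned during the interval
    return net_angle / interval_s                  # average rate in degrees per second

# e.g. 1200 pulses of 0.01 degrees each, counted over 2 seconds -> 6 deg/s
print(rate_from_pulses(1200, 0.01, 2.0))
```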
Fundamental to an understanding of the operation of a vibrating structure gyroscope is an understanding of the Coriolis force. In a rotating system, every point rotates with the same rotational velocity. As one approaches the axis of rotation of the system, the rotational velocity remains the same, but the speed in the direction perpendicular to the axis of rotation decreases. Thus, in order to travel in a straight line towards or away from the axis of rotation while on a rotating system, lateral speed must be either increased or decreased in order to maintain the same relative angular position (longitude) on the body. The act of slowing down or speeding up is acceleration, and the Coriolis force is this acceleration times the mass of the object whose longitude is to be maintained. The Coriolis force is proportional to both the angular velocity of the rotating object and the velocity of the object moving towards or away from the axis of rotation.

Vibrating structure gyroscopes contain a micro-machined mass which is connected to an outer housing by a set of springs. This outer housing is connected to the fixed circuit board by a second set of orthogonal springs. The mass is continuously driven sinusoidally along the first set of springs. Any rotation of the system will induce Coriolis acceleration in the mass, pushing it in the direction of the second set of springs. As the mass is driven away from the axis of rotation, the mass will be pushed perpendicularly in one direction, and as it is driven back toward the axis of rotation, it will be pushed in the opposite direction, due to the Coriolis force acting on the mass. The Coriolis force is detected by capacitive sense fingers located along the mass housing and the rigid structure. As the mass is pushed by the Coriolis force, a differential capacitance will be detected as the sensing fingers are brought closer together. When the mass is pushed in the opposite direction, different sets of sense fingers are brought closer together; thus the sensor can detect both the magnitude and direction of the angular velocity of the system 7).

Optical gyroscopes were developed soon after the discovery of laser technology. The appeal of this type of gyroscope is that they contain no moving parts, and hence are not susceptible to mechanical wear or drifting. Optical gyroscopes differ from other types in that they do not rely on conservation of angular momentum in order to operate. Instead, their functionality depends only on the constancy of the speed of light. Optical gyroscopes operate under the principle of the Sagnac effect. It is easiest to understand this principle in the general case of a circle. A light source is positioned on a circle, emitting two beams of light in opposite directions around the circle. If the source stays stationary, then both beams of light require an equal amount of time to traverse the circle and arrive back at the source. However, if the source is rotating along the circle, then it takes more time for the beam in front of the source to complete its path. This principle can in fact be generalized to any loop, regardless of shape. In particular, we can measure the effect using a ring interferometry setup. Here, a laser beam is first split by a half-silvered mirror. Then the two beams traverse identical paths but in opposite directions around a loop consisting of either flat mirrors and air-filled straight tubes or a long fibre-optic cable. These two beams then recombine at a detector.
When the system is rotating, one of the beams must travel a greater distance than the oppositely travelling beam to make it to the detector. This difference in path length (or Doppler shift) is detected as a phase shift by interferometry. This phase shift is proportional to the angular velocity of the system5).

Often optical gyroscope units consist of 3 mutually orthogonal gyroscopes for rotation sensing about all three orthogonal rotation axes. They are also typically implemented with 3-axis accelerometers, thus providing full motion sensing in 6 DoF. Like rotor gyroscopes, optical gyroscopes are limited in how physically small they can get, due to the extensive amount of fibre-optic cable needed and the presence of optical equipment. Thus these gyroscopes are mostly used in naval and aviation applications where physical size is not an issue. Therefore optical gyroscopes are typically not readily available commercially8).

A gyroscope sensor has the following basic specifications:
- Measurement range
- Number of sensing axes
- Nonlinearity
- Working temperature range
- Shock survivability
- Bandwidth
- Angular Random Walk (ARW)
- Bias Drift
- Bias Instability

Measurement range – This parameter specifies the maximum angular speed that the sensor can measure, and is typically given in degrees per second (˚/sec).

Number of sensing axes – Gyroscopes are available that measure angular rotation about one, two, or three axes. Multi-axis sensing gyros have multiple single-axis gyros oriented orthogonally to one another. Vibrating structure gyroscopes are usually single-axis (yaw) gyros or dual-axis gyros, while rotary and optical gyroscope systems typically measure rotation about three axes.

Nonlinearity – Gyroscopes output a voltage proportional to the sensed angular rate. Nonlinearity is a measure of how closely the output voltage tracks that linear relationship with the actual angular rate. Not accounting for the nonlinearity of a gyro can result in some measurement error. Nonlinearity is given as a percentage error from a linear fit over the full-scale range, or as an error in parts per million (ppm).

Working temperature range – Most electronics only work over some range of temperatures. Operating temperature ranges for gyroscopes are quite wide; they run from roughly -40˚C up to anywhere between 70 and 200˚C, and gyro behaviour tends to be quite linear with temperature. Many gyroscopes are available with an onboard temperature sensor, so one does not need to worry about temperature-related calibration issues.

Shock Survivability – In systems where both linear acceleration and angular rotation rate are measured, it is important to know how much force the gyroscope can withstand before failing. Fortunately gyroscopes are very robust, and can withstand a very large shock (over a very short duration) without breaking. This is typically measured in g’s (1 g = earth’s acceleration due to gravity), and occasionally the time for which the maximum g-force can be applied before the unit fails is also given.

Bandwidth – The bandwidth of a gyroscope typically specifies how many measurements can be made per second. Thus the gyroscope bandwidth is usually quoted in Hz.

Angular Random Walk (ARW) – This is a measure of gyro noise and has units of deg/√hour or deg/√sec. It can be thought of as the variation (or standard deviation), due to noise, of the result of integrating the output of a stationary gyro over time.
So, for example, consider a gyro with an ARW of 1°/√sec being integrated many times to derive an angular position measurement: for a stationary gyro, the ideal result - and also the average result - will be zero. But the longer the integration time, the greater will be the spread of the results away from the ideal zero. Being proportional to the square root of the integration time, this spread would be 1° after 1 second and 10° after 100 seconds.

Bias – The bias, or bias error, of a rate gyro is the signal output from the gyro when it is NOT experiencing any rotation. Even the most perfect gyros in the world have error sources, and bias is one of these errors. Bias can be expressed as a voltage or a percentage of full scale output, but essentially it represents a rotational velocity (in degrees per second). Again, in a perfect world, one could make allowance for a fixed bias error. Unfortunately bias error tends to vary, both with temperature and over time. The bias error of a gyro is due to a number of components:
- calibration errors
- switch-on to switch-on variation
- bias drift
- effects of shock (g level)
Individual measurements of bias are also affected by noise, which is why a meaningful bias measurement is always an averaged series of measurements.

Bias Drift – This refers specifically to the variation of the bias over time, assuming all other factors remain constant. Basically this is a warm-up effect, caused by the self-heating of the gyro and its associated mechanical and electrical components. This effect would be expected to be more prevalent over the first few seconds after switch-on and to be almost non-existent after (say) five minutes.

Bias Instability – Bias instability is a fundamental measure of the 'goodness' of a gyro. It is defined as the minimum point on the Allan Variance curve, usually measured in °/hr. It represents the best bias stability that could be achieved for a given gyro, assuming that bias averaging takes place at the interval defined at the Allan Variance minimum 9).

Analog Devices ADXRS610
Description: ±300 degrees per second Single Chip Yaw Rate Gyro with Signal Conditioning
Notes: Nonlinearity: 0.1% of Full-Scale Range; Working Temperature Range: -40°C to 105°C; Shock Survivability: 2000 g; Bandwidth: Adjustable (0.01 - 2500 Hz)
Variants: ADXRS612 (±250 degrees per second); ADXRS614 (±50 degrees per second)

Analog Devices EVAL-ADXRS610Z
Description: ±500/110 degrees per second dual-axis gyroscope
Notes: Two separate outputs per axis for standard and high sensitivity. X-/Y-Out pins: 500°/s full scale range, 2.0 mV/°/s sensitivity. X/Y4.5Out pins: 110°/s full scale range, 9.1 mV/°/s sensitivity.

Sparkfun SEN-08189 6 DoF Inertial Measuring Unit
Description: Bluetooth Wireless Inertial Measurement Unit consisting of 3 ADXRS150 (±150°/s max rate) gyroscopes and a Freescale MMA7260Q 3-axis accelerometer
Datasheet: 6 DoF Measurement Unit; Freescale MMA7260Q 3-Axis Accelerometer
Notes: ADXRS150 Gyroscope Specs: Nonlinearity: 0.1% of Full-Scale Range; Working Temperature Range: -40°C to 85°C; Shock Survivability: 2000 g; Bandwidth: Adjustable (Typical Bandwidth: 40 Hz)
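To make the Angle Random Walk discussion above concrete, here is a small simulation sketch (illustrative only; it assumes ideal white rate noise and uses NumPy). Integrating the output of a stationary gyro gives an angle error whose spread grows with the square root of time, reproducing the 1° after 1 second and 10° after 100 seconds example.

```python
import numpy as np

ARW = 1.0          # angle random walk, deg/sqrt(s)
dt = 0.1           # sample period, s
T = 100.0          # total integration time, s
trials = 2000      # number of independent stationary gyros to simulate

rng = np.random.default_rng(0)
# White rate noise whose integral has standard deviation ARW * sqrt(t)
rate_noise = rng.normal(0.0, ARW / np.sqrt(dt), size=(trials, int(T / dt)))
angle = np.cumsum(rate_noise * dt, axis=1)        # integrated angle, deg

for t in (1.0, 100.0):
    idx = int(t / dt) - 1
    print(f"std of integrated angle after {t:g} s: {angle[:, idx].std():.2f} deg")
```

The printed standard deviations come out near 1° and 10°, matching the square-root-of-time behaviour described above.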
http://www.sensorwiki.org/doku.php/sensors/gyroscope
Using their knowledge of exponential growth, students will forecast the population of their assigned community by the year 2013 and identify the possible problems encountered by the community because of population growth. The students are divided into groups of five members, with each member taking on a different role as demographer, mathematician, statistician or multimedia designer. They have to visit www.forecastingproject.blogspot.com *, the blog site of the teacher, for the details of the roles. Students will also have to create a blog site to be used as a collaborative and communication tool as they go through the process of making the project. The product is a multimedia presentation on the results of the research they have to conduct, uploaded to the blog site or wiki site that they will create. (Refer to the Teacher’s Guide (doc).) An assessment timeline is used to monitor and evaluate the continuous progression of their work.

Curriculum Framing Questions

First Session: Motivational activity regarding social issues, problems and concerns. Ask the class whether there is a possibility of predicting our future. Introduce the essential question “How do we know what lies ahead of us?” Draw students into understanding the relevance of population and how population makes sense in the overall perspective of humanity. A graphic organizer (doc) is used in the brainstorming session on their knowledge of population, a question-and-answer activity and a KWL activity. The following unit questions will be given to the students in this session:
1. How can the increase in population affect the health condition of people?
2. How can population growth affect the living conditions of people?

Second Session (2 hours): Presentation and discussion of the exponential function. After the presentation, discussion and checking of the interactive site, students need to answer the following questions:
1. How can population be computed?
2. How is the rate of increase or decrease of exponential growth or decay determined?
3. What is exponential growth/an exponential function?
4. What are the properties of exponential functions?
Students will refer to the powering and exponential growth (doc) reading for an in-depth understanding of the topic. They will visit an interactive site (http://www.analyzemath.com/expfunction/expfunction.html *) that will help them understand the exponential function, and will answer the problems posted on the site by group.

Third Session (2 hours): The students will work individually on the different exercises and seatwork on computation and problem solving. They will answer Questions (doc) covering the reading. For the assignment, refer to the Applying Mathematics (doc) activity sheet. The students will then be given samples on how to graph an exponential function; they will be asked to visit an interactive site (http://www.analyzemath.com/Graphing/GraphExponentialFunction.html *) for additional information and to practice using the different samples on the site. The students will answer the following questions:
1. How is the trend of a given exponential function determined?
2. How is the zero of an exponential function determined using the laws of exponents?
3. How can the graph of exponential functions be illustrated?

Fourth Session (2 hours): Introduce the webquest activity; discuss the task the students should do, the roles they are going to play and the output they are going to produce.
Five members should comprise each group, with the following roles to play: demographer, data analyst/researcher, mathematician, statistician and multimedia designer. Students need to visit the blog site of the teacher (www.forecastingproject.blogspot.com *) for the detailed instructions. This site will be the point of interaction between the teacher and students, and between students as well. Each group will decide the role for each member, taking into account the capability of the member to work in the respective role. Each group will come up with a graphic organizer on what area they will focus on after thorough discussion, and will work on their project plan.

In this webquest you will investigate population growth in a particular area. Working in groups, you will study the population growth of the different barangays in Davao City. Identify the trend in the population increase every year until 2013. Relate the population growth data with the data on the different diseases encountered by the barangay every year from 2003 to 2013. Students will come up with a projected profile of the population of the different barangays by 2013, and with a hypothetical statement relating population growth and the health problems encountered by the barangay assigned to each group. Each group will come up with a tabular presentation of data, a computation of the increase of population integrating the concept of the exponential function, hypotheses formulated based on the information gathered, and a PowerPoint presentation designed to show the results of the investigation. The whole class will then process the outputs of the different groups by comparing and contrasting the results of each group.

You will be assigned to a team of five students. Decide on what role you are going to take.

Demographer: Gather data about the population of the specified barangay assigned to your group. You will have to contact the local government unit, which is the barangay, and find out where you can get pertinent data regarding the population for five years. The data should be from 2003 – 2008. Make a tabular representation of the data gathered. In your presentation, see to it that you have data showing the age brackets such as children, adolescents and adults. Prepare the population forecast. Work closely with the mathematician.

Data analyst/researcher: Responsible for gathering data in relation to the size and movement of people in an area as it relates to human population, such as health. Record the health problems encountered by the people in the barangay assigned to your group, following the age brackets such as children, adolescents and adults. Make a tabular presentation of the data gathered, also from 2003 – 2008.

Statistician: Do the basic statistical analysis of the data gathered. Make a graphical representation of the data and formulate hypotheses based on the information provided by the data.

Mathematician: Study the trend of the population and present a mathematical computation that relates the exponential function to population growth. Show the solution of the computation. Work closely with the research analyst and demographer.

Multimedia designer: Prepare the actual multimedia presentation integrating all the data gathered by the other members of the team. Consolidate all the work of the other members of the team.

There will be an orientation about the activity inside the classroom, where you will be grouped into a team of five with each member having a specific role to play as outlined above.
1. Identify the person in the local government unit who is responsible for keeping the population record of a particular barangay.
Be sure to get data from 2003-2008 and record it properly. Come up with a matrix form so that you will have organized data, including the breakdown of the population into children, adolescent and adult brackets. (2 days) Please refer to this website to guide you on how to interview the person in charge of the information you want to gather: http://projects.edtech.sandi.net/staffdev/tpss99/processguides/interviewing.html *
2. From the data gathered, prepare a population forecast up to the year 2013, applying your knowledge of the exponential function.
3. Identify the person in the barangay health center responsible for keeping the records on health problems, focusing only on the five major diseases per year affecting the specified age groups.
4. Prepare a matrix table (doc) to organize the data you gathered, capturing pertinent data for your final output.

Analyzing the information gathered: The gathered data will then be forwarded to the statistician, who will analyze it, make a graphical presentation, and formulate hypotheses regarding the data. Analysis will focus on the rate of increase or decrease of the barangay population, taking into account the age groups and the different health problems encountered in relation to the increase in population.

Transforming information into a product: Referring to the data gathered by the research analyst and demographer, the mathematician will then compute and forecast the possible population in the different age groups and what part of the population will likely be affected by a particular health problem, applying the knowledge of the exponential function (a worked sketch of this computation appears after the session outline below). After the data/research analyst, demographer, statistician and mathematician have done their work, it's time for the multimedia designer to plan the PowerPoint presentation. He or she will be responsible for capturing in the presentation the overall concept, design and layout, and for giving an appropriate explanation of the results submitted by the other members through brainstorming. Tips (ppt) and a guide (doc) about the presentation are provided for you.

Fifth Session (3 hours): Demographers start to visit the area and conduct the survey together with the researcher/analyst. Interview officials and health personnel in the areas assigned to them. (Students can use their vacant hours or their activity period to do this.) Use the template for the data. Demographers can use the blog site for communicating data to other members of the group so that they stay updated and can prepare their assigned tasks.

Sixth Session (2 hours): Discussion of the data gathered, tabulation, computation, analysis and interpretation of results. Students can use Google Docs if two hours is not enough for them to finalize their presentation. Each member should work hand in hand with the others to come up with a comprehensive output.

Seventh Session (2 hours): Prepare the multimedia presentation with conclusion, recommendation and reflection. Polishing of their output. Give students tips on how to make a good presentation. Refer to the storyboard guide. Students will now answer the essential question given in the first session, and the unit questions as well. Showcasing of their project per group through a PowerPoint presentation (ppt). Creation of their blog site (see student sample blog site at http://edingski.edublogs.org/) where they can upload the results of their project.
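As referenced above, here is a minimal sketch of the forecasting computation the mathematician role performs. The population numbers are made up for illustration only and are not data from any barangay. The sketch assumes a constant annual growth rate r, so P(t) = P0 * (1 + r)^t, and estimates r from the first and last observed years before extrapolating to 2013:

```python
# Hypothetical population counts for an assumed barangay, 2003-2008
years = [2003, 2004, 2005, 2006, 2007, 2008]
population = [10000, 10240, 10510, 10780, 11050, 11330]

# Exponential-growth model: P(t) = P0 * (1 + r)**t, with t in years since 2003.
# Estimate the average annual growth rate from the endpoints.
t_span = years[-1] - years[0]
r = (population[-1] / population[0]) ** (1 / t_span) - 1
print(f"estimated annual growth rate: {r:.2%}")

# Forecast each year up to 2013
for year in range(2009, 2014):
    forecast = population[0] * (1 + r) ** (year - years[0])
    print(year, round(forecast))
```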
Upon completion of the webquest activity, students will realize the importance of connecting the topic in the classroom to real-life issues and how it can help in coming up with ways of preparing for whatever happens in the future. It will help them realize that we can make a difference if we plan ahead of time and if we develop an ability to forecast. Forecasting, as defined by Dunn (1994), is a procedure for producing factual information about future states of society on the basis of prior information about policy problems. The results of their project can be presented to the barangay and can be very useful for planning alternative solutions to the issues covered in the project. Results should be given to the local government in order to address issues pertaining to the growth rate of the population and the possible problems or factors that may be affected because of the population growth.

Accommodations for Differentiated Instruction

Special Needs Students: Students with special needs have to be grouped with somebody who can help and assist them. Give the student a role that he/she can perform according to his/her ability. Don’t give complex tasks.

Nonnative English Speakers: Should be grouped with native speakers to assist them, or identify somebody who is a good translator to help the student. Direct the student to a site with a language translator, or he/she can use specialty search engines such as Language Tools at http://www.itools.com/lang/ *

Gifted Students: Give complex tasks such as additional problems on exponential functions to solve as part of the exercises or assignments. A research report can be a challenging task for gifted students. The research report should be evaluated using the rubric.
http://www.intel.ph/content/www/ph/en/education/k12/project-design/unit-plans/forecasting.html
Tutorial On Centre of Mass - Centre of Mass Tutorial & Sample Questions:

a.) Two particle system
- The centre of mass of a body or a system of particles is the point at which the whole mass of the system or body is supposed to be concentrated, and it moves as if the whole external force is applied at that point.
- The motion of the centre of mass of a body represents the motion of the whole body.
- Position of centre of mass: Two particles of masses m1 and m2 are separated by a distance d. If x1 and x2 are the distances of their centre of mass from m1 and m2, then m1x1 = m2x2, so x1 = m2d/(m1 + m2) and x2 = m1d/(m1 + m2).
b.) The centre of mass of a heavier-and-lighter mass system lies nearer to the heavier mass.
c.) Number of particles lying along the x-axis: Particles of masses m1, m2, m3, … are at distances x1, x2, x3, … from the origin; the distance of the centre of mass from the origin is x_cm = (m1x1 + m2x2 + m3x3 + …)/(m1 + m2 + m3 + …).
d.) Number of particles lying in a plane: Particles of masses m1, m2, m3, … are lying in the xy plane at positions (x1,y1), (x2,y2), (x3,y3), …; then the position co-ordinates of their centre of mass are x_cm = (Σ mi xi)/(Σ mi) and y_cm = (Σ mi yi)/(Σ mi).
e.) Particles distributed in space: If (x1,y1,z1), (x2,y2,z2), … are the position co-ordinates of particles of masses m1, m2, …, the position co-ordinates of their centre of mass are x_cm = (Σ mi xi)/(Σ mi), y_cm = (Σ mi yi)/(Σ mi), z_cm = (Σ mi zi)/(Σ mi).
f.) In vector notation: If r1, r2, r3, … are the position vectors of particles of masses m1, m2, m3, …, then the position vector of their centre of mass is r_cm = (m1r1 + m2r2 + m3r3 + …)/(m1 + m2 + m3 + …).
g.) Relative to the centre of mass: m1r1' + m2r2' + … + mr rr' = 0, where m1, m2, …, mr have position vectors r1', r2', …, rr' relative to the centre of mass; i.e. the algebraic sum of the moments of the masses of a system about its centre of mass is always zero.

- Position of centre of mass of a body:
a.) depends on the shape of the body;
b.) depends on the distribution of mass for a given shape of the body;
c.) coincides with the geometric centre of the body if the body is in a uniform gravitational field.
- There may or may not be any mass at the centre of mass.
- The centre of mass may be within or outside the body.
- For symmetrical bodies with uniform distribution of mass it coincides with the geometric centre.

- Velocity of centre of mass: If v1, v2, …, vn are the velocities of particles of masses m1, m2, m3, …, mn, the velocity of their centre of mass is v_cm = (m1v1 + m2v2 + … + mnvn)/(m1 + m2 + … + mn), i.e. the total momentum of the system is the product of the mass of the whole system and the velocity of the centre of mass.

- LAW OF CONSERVATION OF LINEAR MOMENTUM: In the absence of a net external force, the total linear momentum of a system remains constant, i.e. if F_ext = 0 then m1v1 + m2v2 + … + mnvn = constant.
Effect of Internal Forces - The total linear momentum of a system of particles remains constant under the influence of internal forces.
a.) The linear momentum is conserved in all types of collisions (elastic and inelastic): m1u1 + m2u2 = m1v1 + m2v2, where u1 and u2 and v1 and v2 are the velocities of two particles with masses m1 and m2 before and after the collision.
b.) In the absence of external forces, the linear momenta of individual particles can change, but the total linear momentum of the whole system remains constant.
c.) The law of conservation of linear momentum is based on Newton's laws of motion. It is a fundamental law of nature and there is no exception to it.
d.) Examples of the law of conservation of linear momentum:
i.) Motion of a rocket: (mv)gases = - (MV)rocket
ii.) Firing of a bullet from a gun
iii.) Explosion of a shell fired from a cannon
iv.) Two masses m1 and m2, attached to the two ends of a spring, when stretched in opposite directions and released: the linear momentum of the system is conserved.
e.) This law is valid only for linear motion.
Rocket propulsion, motion of jet aeroplane and sailing of a boat all depend upon the law of conservation of momentum. - If two particles of masses m1and m2are moving with velocities 1and 2at right angles to each other, then the velocity of their centre of mass is given by If , ........ are the accelerations of particles of masses m1, m2, m3.......mnthen the acceleration of their centre of mass is - Acceleration of centre of mass. m1+ m2+ m3+ .....+mn= M total mass of the system, then m1 a.) the centre of mass of a system is at rest if the centre of mass is initially at rest. - Centre of mass can be accelerated only by a net external force. - Internal forces cannot accelerate the centre of mass or change the state of centre of mass. - In the absence of external forces, b.) if the centre of mass of a system is moving with constant velocity, it continues to move with the same velocity. a.) The acceleration of centre of mass before and immediately after explosion is acm= g downward. - For a ring the centre of mass is its centre where there is no mass. - For a circular disc the centre of mass is at its Geometric centre where there is mass. - For a triangular plane lamina, the centre of mass is the point of intersection of the medians of the triangle. - The centre of mass of an uniform square plate lies at the intersection of the diagonals. - Out of a uniform circular disc of radius R, if a circular sheet of r is removed; the Centre of mass of remaining part shits by a distance . d is the distance of the centre of the smaller part from the original disc. - Out of a uniform solid sphere of radius R, if a sphere of radius r is removed, the centre of mass of the remaining part, shifts by . d is the distance of the smaller sphere from the centre of the original sphere. - When shell in flight explodes b.) The centre of mass of all the fragments will continue to move along the same trajectory as long as all the fragments are still in space. c.) If all the fragments reach the ground simultaneously, the centre of mass will complete the original trajectory. d.) If some of the fragments reach the ground earlier than the other fragments, the acceleration of centre of mass changes and its tragectory will change. a.) If the man walks a distance L on the boat, the boat is displaced in the opposite direction relative to shore or water by a distance - When a person walks on a boat in still water, centre of mass of person, boat system is not displaced. (m = mass of man, M = mass of boat) b.) distance walked by the mass relative to shore or water is (L-x) a.) in the above case Vcm and a cm= 0 - Two masses starting from rest move under mutual force of attraction towards each other, they meet at their centre of mass. b.) If the two particles are m1and m2and their velocities are v1and v2, then m1v1= -m2v2 c.) If the two particles have accelerations a1and a2. d.) If s1and s2are the distances travelled before they meet i) The particles come closer before collision and after collision they either stick together or move away from each other. - The event or the process in which two bodies, either coming in contact with each other or due to mutual interaction at a distance apart, affect each others motion (velocity, momentum, energy or the direction of motion) is defined as a collision between those two bodies. In short, the mutual interaction between two bodies or particles is defined as a collision. ii) The particles need not come in contact with each other for a collision. 
iii) The law of conservation of linear momentum is necessarily conserved in all types of collisions whereas the law of conservation of mechanical energy is not (i) Elastic collision or perfect elastic collision (ii) Semi elastic collision (iii) Perfectly inelastic collision or plastic collision i) One dimensional collision: The collision, in which the particles move along the same straight line before and after the collision, is defined as one dimensional collision. ii) According to the law of conservation of kinetic energy iii) According to the law of conservation of momentum (i) The velocity of first body after collision - Newton's law of elastic collision - The relative velocity of two particles before collision is equal to the negative of relative velocity after collision i.e (v1- v2) = -(u1- u2) - Important formulae and features for one dimensional elastic collision. (ii) The velocity of second body after collision (iii) If the body with mass m2 is initially at rest, and u2= 0 then and iv) When a particle of mass m1moving with velocity u1collides with another particle with mass m2at rest and v) m1= m2 then v1= 0 and v2= u1. Under this condition the first particle come to rest and the second particle moves with the velocity of first particle before collision. In this state there occurs maximum transfer of energy. vi) If m1>> m2then v1= u1and v2= 2u1under this condition the velocity of first particle remains unchanged and velocity of second particle becomes double that of first. vii) If m1<< m2then v1= -u1and v2= under this condition the second particle remains at rest while the first particle moves with the same velocity in the opposite direction viii) When m1= m2= m but u20 then v1= u2 and v2= u1i.e the particles mutually exchange their velocities. ix) Exchange of energy is maximum when m1= m2. This fact is utilised in atomic reactor in slowing down the neutrons. To slow down the neutrons these are made to collide with nuclei of almost similar mass. For this hydrogen nuclei are most appropriate. x.) Target Particle at rest : If m2is at rest, before collision xi) If m2is at rest and kinetic energy of m1before collision with m2is E. The kinetic energy of m1and m2 after collision is xii) In the above case fraction of KE retained by m1is Fraction of KE transferred by m1to m2is i) The collision, in which the kinetic energy of the system decreases as a result of collision, is defined as inelastic collision - One dimensional inelastic collision: ii) According to law of conservation of momentum iii) According to law of conservation of energy Q = other forms of energy like heat energy, sound energy etc. Q0 iv) According to Newton's law of inelastic collision (v1- v2) = -e(u1- u2) e = Coefficient of restitution - Coefficient of Restitution (e) - iii) e is dimensionless and carries no limit. iv) Limits of e 0 < e < 1 v) For plastic bodies and for perfectly inelastic collision e = 0. e = 1 for perfect elastic collision. vi) The value of e depends upon the material of colliding bodies. - SEMI - ELASTIC COLLISIONS - The velocity of first body after collision and velocity of second body - Loss of energy in inelastic collision - In case of inelastic collision the body gets strained and its temperature changes - If a body falls from a height h and strikes the ground level with velocity and rebounds with velocity v up to a height h1then the coefficient of restitution is given by If the body rebounds again and again to heights h1, h2, h3.... 
then Thus the total time taken by the body in coming to rest - The total distance covered by the body for infinite number of collisions - Time taken by the body in falling through height h is Perfectly inelastic collision: - For a semi elastic collision 0 < e < 1 i) The collision, in which the two particles stick together after the collision, is defined as the perfectly inelastic collision. ii) For a perfectly inelastic collision e = 0 iii) According to law of conservation of momentum iv) Loss in Kinetic energy of system = (u1-u2)2. v) Loss of kinetic energy is maximum when the colliding particles have equal momentum in opposite directions. 38. In an explosion, linear momentum is conserved, but kinetic energy is not conserved. The kinetic energy of the system after explosion increases. The internal energy of the system is used for the above purpose. 39. If a stationary shell breaks into two fragments, they will move in opposite directions, with velocities in the inverse ratio of their masses. 40. In the above, the two fragments have the same magnitude of linear momentum. 41. In the above case, the Kinetic energy of the two fragments is inversely propotional to their masses. 42. If the two fragments have equal masses, the two fragments have equal speeds in opposite directions. 43. If a shell breaks into three fragments, the total momentum of two of the fragments must be equal and opposite to the momentum of the third fragment is 44. When a stationary shell explodes, its total momentum is zero, before or after explosion. Recoil of Gun 45. If a stationary gun fires a bullet horizontally, the total momentum of the gun + bullet is zero before and after firing. 46. If M and are the mass of the gun and velocity of recoil of the gun, m and is the mass of the bullet and velocity of the bullet, then M + m = 0 i.e. |MV| = |mv| or magnitude of momentum of the gun is equal to magnitude of momentum of the bullet. - The bullet has greater Kinetic energy than the gun.
http://www.goiit.com/posts/show/0/content-class-11-centre-of-mass-903645.htm
13
51
An Introduction to Python Lists Fredrik Lundh | August 2006 The list type is a container that holds a number of other objects, in a given order. The list type implements the sequence protocol, and also allows you to add and remove objects from the sequence. Creating Lists # To create a list, put a number of expressions in square brackets: L = L = [expression, ...] This construct is known as a “list display”. Python also supports computed lists, called “list comprehensions”. In its simplest form, a list comprehension has the following syntax: L = [expression for variable in sequence] where the expression is evaluated once, for every item in the sequence. The expressions can be anything; you can put all kinds of objects in lists, including other lists, and multiple references to a single object. You can also use the built-in list type object to create lists: L = list() # empty list L = list(sequence) L = list(expression for variable in sequence) The sequence can be any kind of sequence object or iterable, including tuples and generators. If you pass in another list, the list function makes a copy. Note that Python creates a single new list every time you execute the expression. No more, no less. And Python never creates a new list if you assign a list to a variable. A = B = # both names will point to the same list A = B = A # both names will point to the same list A = ; B = # independent lists For information on how to add items to a list once you’ve created it, see Modifying Lists below. Accessing Lists # Lists implement the standard sequence interface; len(L) returns the number of items in the list, L[i] returns the item at index i (the first item has index 0), and L[i:j] returns a new list, containing the objects between i and j. n = len(L) item = L[index] seq = L[start:stop] If you pass in a negative index, Python adds the length of the list to the index. L[-1] can be used to access the last item in a list. For normal indexing, if the resulting index is outside the list, Python raises an IndexError exception. Slices are treated as boundaries instead, and the result will simply contain all items between the boundaries. Lists also support slice steps: seq = L[start:stop:step] seq = L[::2] # get every other item, starting with the first seq = L[1::2] # get every other item, starting with the second Looping Over Lists # The for-in statement makes it easy to loop over the items in a list: for item in L: print item If you need both the index and the item, use the enumerate function: for index, item in enumerate(L): print index, item If you need only the index, use range and len: for index in range(len(L)): print index The list object supports the iterator protocol. To explicitly create an iterator, use the built-in iter function: i = iter(L) item = i.next() # fetch first value item = i.next() # fetch second value Python provides various shortcuts for common list operations. For example, if a list contains numbers, the built-in sum function gives you the sum: v = sum(L) total = sum(L, subtotal) average = float(sum(L)) / len(L) If a list contains strings, you can combine the string into a single long string using the join string method: s = ''.join(L) Python also provides built-in operations to search for items, and to sort the list. These operations are described below. Modifying Lists # The list type also allows you to assign to individual items or slices, and to delete them. L[i] = obj L[i:j] = sequence Note that operations that modify the list will modify it in place. 
This means that if you have multiple variables that point to the same list, all variables will be updated at the same time. L = M = L # modify both lists L.append(obj) To create a separate list, you can use slicing or the list function to quickly create a copy: L = M = L[:] # create a copy # modify L only L.append(obj) You can also add items to an existing sequence. The append method adds a single item to the end of the list, the extend method adds items from another list (or any sequence) to the end, and insert inserts an item at a given index, and move the remaining items to the right. L.append(item) L.extend(sequence) L.insert(index, item) To insert items from another list or sequence at some other location, use slicing syntax: L[index:index] = sequence You can also remove items. The del statement can be used to remove an individual item, or to remove all items identified by a slice. The pop method removes an individual item and returns it, while remove searches for an item, and removes the first matching item from the list. del L[i] del L[i:j] item = L.pop() # last item item = L.pop(0) # first item item = L.pop(index) L.remove(item) The del statement and the pop method does pretty much the same thing, except that pop returns the removed item. Finally, the list type allows you to quickly reverse the order of the list. Reversing is fast, so temporarily reversing the list can often speed things up if you need to remove and insert a bunch of items at the beginning of the list: L.reverse() # append/insert/pop/delete at far end L.reverse() Note that the for-in statement maintains an internal index, which is incremented for each loop iteration. This means that if you modify the list you’re looping over, the indexes will get out of sync, and you may end up skipping over items, or process the same item multiple times. To work around this, you can loop over a copy of the list: for object in L[:]: if not condition: del L[index] Alternatively, you can use create a new list, and append to it: out = for object in L: if condition: out.append(object) A common pattern is to apply a function to every item in a list, and replace the item with the return value from the function: for index, object in enumerate(L): L[index] = function(object) out = for object in L: out.append(function(object)) The above can be better written using either the built-in map function, or as a list comprehension: out = map(function, L) out = [function(object) for object in L] For straightforward function calls, the map solution is more efficient, since the function object only needs to be fetched once. For other constructs (e.g. expressions or calls to object methods), you have to use a callback or a lambda to wrap the operation; in such cases, the list comprehension is more efficient, and usually also easier to read. Again, if you need both the item and the index, use enumerate: out = [function(index, object) for index, object in enumerate(L)] You can use the list type to implement simple data structures, such as stacks and queues. stack = stack.append(object) # push object = stack.pop() # pop from end queue = queue.append(object) # push object = queue.pop(0) # pop from beginning The list type isn’t optimized for this, so this works best when the structures are small (typically a few hundred items or smaller). For larger structures, you may need a specialized data structure, such as collections.deque. 
Another data structure for which a list works well in practice, as long as the structure is reasonably small, is an LRU (least-recently-used) container. The following statements moves an object to the end of the list: If you do the above every time you access an item in the LRU list, the least recently used items will move towards the beginning of the list. (for a simple cache implementation using this approach, see Caching.) Searching Lists # The in operator can be used to check if an item is present in the list: if value in L: print "list contains", value To get the index of the first matching item, use index: i = L.index(value) The index method does a linear search, and stops at the first matching item. If no matching item is found, it raises a ValueError exception. try: i = L.index(value) except ValueError: i = -1 # no match To get the index for all matching items, you can use a loop, and pass in a start index: i = -1 try: while 1: i = L.index(value, i+1) print "match at", i except ValueError: pass Moving the loop into a helper function makes it easier to use: def findall(L, value, start=0): # generator version i = start - 1 try: i = L.index(value, i+1) yield i except ValueError: pass for index in findall(L, value): print "match at", i To count matching items, use the count method: n = L.count(value) Note that count loops over the entire list, so if you just want to check if a value is present in the list, you should use in or, where applicable, index. To get the smallest or largest item in a list, use the built-in min and max functions: lo = min(L) hi = max(L) As with sort (see below), you can pass in a key function that is used to map the list items before they are compared: lo = min(L, key=int) hi = max(L, key=int) Sorting Lists # The sort method sorts a list in place. To get a sorted copy, use the built-in sorted function: out = sorted(L) An in-place sort is slightly more efficient, since Python does not have to allocate a new list to hold the result. By default, Python’s sort algorithm determines the order by comparing the objects in the list against each other. You can override this by passing in a callable object that takes two items, and returns -1 for “less than”, 0 for “equal”, and 1 for “greater than”. The built-in cmp function is often useful for this: def compare(a, b): return cmp(int(a), int(b)) # compare as integers L.sort(compare) def compare_columns(a, b): # sort on ascending index 0, descending index 2 return cmp(a, b) or cmp(b, a) out = sorted(L, compare_columns) Alternatively, you can specify a mapping between list items and search keys. If you do this, the sort algorithm will make one pass over the data to build a key array, and then sort both the key array and the list based on the keys. L.sort(key=int) out = sorted(L, key=int) If the transform is complex, or the list is large, this can be a lot faster than using a compare function, since the items only have to be transformed once. Python’s sort is stable; the order of items that compare equal will be preserved. Printing Lists # By default, the list type does a repr on all items, and adds brackets and commas as necessary. In other words, for built-in types, the printed list looks like the corresponding list display: print [1, 2, 3] # prints [1, 2, 3] To control formatting, use the string join method, combined with either map or a list comprehension or generator expression. 
print "".join(L) # if all items are strings print ", ".join(map(str, L)) print "|".join(str(v) for v in L if v > 0) To print a list of string fragments to a file, you can use writelines instead of write: sys.stdout.writelines(L) # if all items are strings Performance Notes # The list object consists of two internal parts; one object header, and one separately allocated array of object references. The latter is reallocated as necessary. The list has the following performance characteristics: - The list object stores pointers to objects, not the actual objects themselves. The size of a list in memory depends on the number of objects in the list, not the size of the objects. - The time needed to get or set an individual item is constant, no matter what the size of the list is (also known as “O(1)” behaviour). - The time needed to append an item to the list is “amortized constant”; whenever the list needs to allocate more memory, it allocates room for a few items more than it actually needs, to avoid having to reallocate on each call (this assumes that the memory allocator is fast; for huge lists, the allocation overhead may push the behaviour towards O(n*n)). - The time needed to insert an item depends on the size of the list, or more exactly, how many items that are to the right of the inserted item (O(n)). In other words, inserting items at the end is fast, but inserting items at the beginning can be relatively slow, if the list is large. - The time needed to remove an item is about the same as the time needed to insert an item at the same location; removing items at the end is fast, removing items at the beginning is slow. - The time needed to reverse a list is proportional to the list size (O(n)). - The time needed to sort a list varies; the worst case is O(n log n), but typical cases are often a lot better than that. Last Updated: November 2006
http://effbot.org/zone/python-list.htm
13
62
1. Introduction to Functions In everyday life, many quantities depend on one or more changing variables. For example: (a) Plant growth depends on sunlight and rainfall (b) Speed depends on distance travelled and time taken (c) Voltage depends on current and resistance (d) Test marks depend on attitude, listening in lectures and doing tutorials (among many other variables!!) A function is a rule that relates how one quantity depends on other quantities. A particular electrical circuit has a power source and an 8 ohms (Ω) resistor. The voltage in that circult is given by: V = 8I, V = voltage (in volts, V) I = current (in amperes, A) So if I = 4 amperes, then the voltage is V = 8 × 4 = 32 volts. If I increases, so does the voltage, V. If I decreases, so does the voltage, V. We say voltage is a function of current (when resistance is constant). We get only one value of V for each value of I. A bicycle covers a distance in 20 seconds. The speed of the bicycle is given by s = speed (in ms−1, or meters per second, m/s) d = distance (in meters, m) If the distance covered by the bike is 10 m, then the speed is `s = 0.05 × 10 = 0.5\ "m/s"`. If d increases, the speed goes up. If d decreases, the speed goes down. We say speed is a function of distance (when time is constant). We get only one value of s for each value of d. Definition of a Function We have 2 quantities (called "variables") and we observe there is a relationship between them. If we find that for every value of the first variable there is only one value of the second variable, then we say: "The second variable is a function of the first variable." The first variable is the independent variable (usually written as x), and the second variable is the dependent variable (usually written as y). The independent variable and the dependent variable are real numbers. (We'll learn about numbers which are not real later, in Complex Numbers.) We know the equation for the area, A, of a circle from primary school: A = πr2, where r is the radius of the circle This is a function as each value of the independent variable r gives us one value of the dependent variable A. We use x for the independent variable and y for the dependent variable for general cases. This is very common in math. Please realize these general quantities can represent millions of relationships between real quantities. In the equation `y = 3x + 1`, y is a function of x, since for each value of x, there is only one value of y. If we substitute `x = 5`, we get `y = 16` and no other value. The values of y we get depend on the values chosen for x. Therefore, x is the independent variable and y is the dependent variable. The force F required to accelerate an object of mass 5 kg by an acceleration of a ms-2 is given by: `F = 5a`. Here, F is a function of the acceleration, a. The dependent variable is F and the independent variable is a. We normally write functions as: `f(x)` and read this as "function f of x". We can use other letters for functions, like g(x) or y(x). When we are solving real problems, we use meaningful letters like P(t) for power at time t, F(t) for force at time t, h(x) for height of an object, x horizontal units from a fixed point. We often come across functions like: y = 2x2 + 5x + 3 in math. We can write this using function notation: y = f(x) = 2x2 + 5x + 3 Function notation is all about substitution. The value of this function f(x) when `x = 0` is written as `f(0)`. 
We calculate its value by substituting as follows: f(0) = 2(0)2 + 5(0) + 3 = 0 + 0 + 3 = 3 Function Notation: In General In general, the value of any function f(x) when x = a is written as f(a). If we have `f(x) = 4x + 10`, the value of `f(x)` for `x = 3` is written: `f(3) = 4 × 3 + 10 = 22` In other words, when `x = 3`, the value of the function f(x) is `22`. Mathematics is often confusing because of the way it is written. We write `5(10)` and it means `5 × 10= 50`. But if we write `a(10)`, this could mean, depending on the situation, "function a of `10`" (that is, the value of the function a when the independent variable is `10`) Or it could mean multiplication, as in: `a × 10 = 10a`. You have to be careful with this. Also, be careful when substituting letters or expressions into functions. See a discussion on this: Towards more meaningful math notation. This example involves some fixed constant, d. If `h(x) = dx^3+ 5x` then value of `h(x)` for `x = 10` is: `h(10) = d(10)^3+ 5(10)` `= 1000d + 50` We leave the d there because we don't know anything about its value. This example involves the value of a function when the independent variable contains a constant. If the height of an object at time t is given by h(t) = 10t2 − 2t, then a. The height at time `t = 4` is h(4) = 10(4)2 − 2(4) = 10 ×16 − 8 = 152 b. The height at time t = b is h(b) = 10b2 − 2b c. The height at time `t = 3b` is h(3b) = 10(3b)2 − 2(3b) = 10 × 9b2 − 6b = 90b2 − 6b d. The height at time `t = b + 1` is h(b + 1) = 10(b + 1)2 − 2(b + 1) = 10 × (b2 + 2b + 1) − 2b − 2 = 10b2 + 20b + 10 − 2b − 2 = 10b2 + 18b + 8 Evaluate the following functions: (1) Given `f(x) = 3x + 20`, find a. `f(-4)` b. `f(10)` (2) Given that the height of a particular object at time t is h(t) = 50t − 4.9t2, find a. `h(2)` b. `h(5)` (3) The voltage, V, in a particular circuit is a function of time t, and is given by: V(t) = 3t − 1.02t Find the voltage at time a. `t = 4` b. `t = c + 10` (4) If F(t) = 3t − t2 for t ≤ 2, find F(2) and F(3). Didn't find what you are looking for on this page? Try search: Online Algebra Solver This algebra solver can solve a wide range of math problems. (Please be patient while it loads.) Go to: Online algebra solver Ready for a break? Play a math game. (Well, not really a math game, but each game was made using math...) The IntMath Newsletter Sign up for the free IntMath Newsletter. Get math study tips, information, news and updates each fortnight. Join thousands of satisfied students, teachers and parents! Short URL for this Page Save typing! You can use this URL to reach this page: Math Lessons on DVD Easy to understand math lessons on DVD. See samples before you commit. More info: Math videos
http://www.intmath.com/functions-and-graphs/1-introduction-to-functions.php
13
69
Volume vs Area The terms volume and area are often mentioned by many people of different intellect; they might be mathematicians, physicists, teachers, engineers, or just ordinary people. Volume and area are very much related to each other that sometimes some people bet confused about their usage. Volume can be simply defined as the space taken up by a mass in three dimensional (3-D). That particular mass can have any form: solid, liquid, gas or plasma. Volumes of simple objects having less complex shapes are easy to calculate using predefined arithmetic formulas. When it comes to finding out the volume of much more complex and irregular shapes, it is convenient to use integrals. In many cases, computing the volume involves three variables. For instance, volume of a cube is the multiplication of length, width and height. Therefore, the standard unit for the volume is cubic meters (m3). Additionally volumetric measurements can be expressed in liters (L), milliliters (ml) and pints. Apart from using formulas and integrals, the volume of solid objects with irregular shapes can be determined using the liquid displacement method. Area is the surface size of a two dimensional object. For solid objects such as cones, spheres, cylinders area means the surface area that covers the total volume of the object. The standard unit of area is the square meters (m2). Similarly, area can be measured in square centimeters (cm2), square millimeters (mm2), square feet (ft2) etc. In many cases, computing area requires two variables. For simple shapes such as triangles, circles and rectangles there are defined formulas to compute the area. Area of any polygon can be calculated using those formulas by dividing the polygon into simpler shapes. But calculating the surface areas of complex shapes involves multivariable calculus. What is the difference between Volume and Area? Volume describes the space occupied by a mass, while area describes the surface size. Calculation of volume of simple objects requires three variables; say for cube, it requires length, width and height. But, for computing the area of one side of the cube requires only two variables; length and width. Unless the surface area is the one that is discussed, area usually deals with 2-D objects, whereas volume considers 3-D objects. A basic difference is there with standard units for area and volume. Unit of area has an exponent of 2, while the unit of volume has an exponent of 3. Also, when it comes to computation of area and volume, volume calculations are much harder than that of area. Area vs Volume • Volume is the space occupied by a mass, while area is the size of the exposed surface. • Area often has the exponent 2 in its unit, whereas volume has exponent 3. • Generally, volume deals with 3-D objects, while area aims at 2-D objects. (exception being the surface areas of solid objects) • Volumes are difficult to compute than areas.
http://www.differencebetween.com/difference-between-volume-and-vs-area/
13
172
In geometry, the tangent line (or simply the tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Informally, it is a line through a pair of infinitely close points on the curve. More precisely, a straight line is said to be a tangent of a curve y = f(x) at a point x = c on the curve if the line passes through the point (c, f(c)) on the curve and has slope f'(c) where f' is the derivative of f. A similar definition applies to space curves and curves in n-dimensional Euclidean space. As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point. Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space. The first definition of a tangent was "a right line which touches a curve, but which when produced, does not cut it". This old definition prevents inflection points from having any tangent. It has been dismissed and the modern definitions are equivalent to those of Leibniz. Leibniz defined the tangent line as the line through a pair of infinitely close points on the curve. Tangent line to a curve The intuitive notion that a tangent line "touches" a curve can be made more explicit by considering the sequence of straight lines (secant lines) passing through two points, A and B, those that lie on the function curve. The tangent at A is the limit when point B approximates or tends to A. The existence and uniqueness of the tangent line depends on a certain type of mathematical smoothness, known as "differentiability." For example, if two circular arcs meet at a sharp point (a vertex) then there is no uniquely defined tangent at the vertex because the limit of the progression of secant lines depends on the direction in which "point B" approaches the vertex. At most points, the tangent touches the curve without crossing it (though it may, when continued, cross the curve at other places away from the point of tangent). A point where the tangent (at this point) crosses the curve is called an inflection point. Circles, parabolas, hyperbolas and ellipses do not have any inflection point, but more complicated curve do have, like the graph of a cubic function, which has exactly one inflection point. Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this straight line is not a tangent line. This is the case, for example, for a line passing through the vertex of a triangle and not intersecting the triangle—where the tangent line does not exist for the reasons explained above. In convex geometry, such lines are called supporting lines. The geometric idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. 
In the second book of his Geometry, René Descartes said of the problem of constructing the tangent to a curve, "And I dare say that this is not only the most useful and most general problem in geometry that I know, but even that I have ever desired to know". Suppose that a curve is given as the graph of a function, y = f(x). To find the tangent line at the point p = (a, f(a)), consider another nearby point q = (a + h, f(a + h)) on the curve. The slope of the secant line passing through p and q is equal to the difference quotient As the point q approaches p, which corresponds to making h smaller and smaller, the difference quotient should approach a certain limiting value k, which is the slope of the tangent line at the point p. If k is known, the equation of the tangent line can be found in the point-slope form: More rigorous description To make the preceding reasoning rigorous, one has to explain what is meant by the difference quotient approaching a certain limiting value k. The precise mathematical formulation was given by Cauchy in the 19th century and is based on the notion of limit. Suppose that the graph does not have a break or a sharp edge at p and it is neither plumb nor too wiggly near p. Then there is a unique value of k such that, as h approaches 0, the difference quotient gets closer and closer to k, and the distance between them becomes negligible compared with the size of h, if h is small enough. This leads to the definition of the slope of the tangent line to the graph as the limit of the difference quotients for the function f. This limit is the derivative of the function f at x = a, denoted f ′(a). Using derivatives, the equation of the tangent line can be stated as follows: Calculus provides rules for computing the derivatives of functions that are given by formulas, such as the power function, trigonometric functions, exponential function, logarithm, and their various combinations. Thus, equations of the tangents to graphs of all these functions, as well as many others, can be found by the methods of calculus. How the method can fail Calculus also demonstrates that there are functions and points on their graphs for which the limit determining the slope of the tangent line does not exist. For these points the function f is non-differentiable. There are two possible reasons for the method of finding the tangents based on the limits and derivatives to fail: either the geometric tangent exists, but it is a vertical line, which cannot be given in the point-slope form since it does not have a slope, or the graph exhibits one of three behaviors that precludes a geometric tangent. The graph y = x1/3 illustrates the first possibility: here the difference quotient at a = 0 is equal to h1/3/h = h−2/3, which becomes very large as h approaches 0. This curve has a tangent line at the origin that is vertical. The graph y = x2/3 illustrates another possibility: this graph has a cusp at the origin. This means that, when h approaches 0, the difference quotient at a = 0 approaches plus or minus infinity depending on the sign of x. Thus both branches of the curve are near to the half vertical line for which y=0, but none is near to the negative part of this line. Basically, there is no tangent at the origin in this case, but in some context one may consider this line as a tangent, and even, in algebraic geometry, as a double tangent. The graph y = |x| of the absolute value function consists of two straight lines with different slopes joined at the origin. 
As a point q approaches the origin from the right, the secant line always has slope 1. As a point q approaches the origin from the left, the secant line always has slope −1. Therefore, there is no unique tangent to the graph at the origin. Having two different (but finite) slopes is called a corner. Finally, since differentiability implies continuity, the contrapositive states discontinuity implies non-differentiability. Any such jump or point discontinuity will have no tangent line. This includes cases where one slope approaches positive infinity while the other approaches negative infinity, leading to an infinite jump discontinuity When the curve is given by y = f(x) then the slope of the tangent is so by the point–slope formula the equation of the tangent line at (X, Y) is where (x, y) are the coordinates of any point on the tangent line, and where the derivative is evaluated at . When the curve is given by y = f(x), the tangent line's equation can also be found by using polynomial division to divide by ; if the remainder is denoted by , then the equation of the tangent line is given by When the equation of the curve is given in the form f(x, y) = 0 then the value of the slope can be found by implicit differentiation, giving The equation of the tangent line at a point (X,Y) such that f(X,Y) = 0 is then This equation remains true if but (in this case the slope of the tangent is infinite). If the tangent line is not defined and the point (X,Y) is said singular. For algebraic curves, computations may be simplified somewhat by converting to homogeneous coordinates. Specifically, let the homogeneous equation of the curve be g(x, y, z) = 0 where g is a homogeneous function of degree n. Then, if (X, Y, Z) lies on the curve, Euler's theorem implies It follows that the homogeneous equation of the tangent line is The equation of the tangent line in Cartesian coordinates can be found by setting z=1 in this equation. To apply this to algebraic curves, write f(x, y) as where each ur is the sum of all terms of degree r. The homogeneous equation of the curve is then Applying the equation above and setting z=1 produces If the curve is given parametrically by then the slope of the tangent is giving the equation for the tangent line at as If , the tangent line is not defined. However, it may occur that the tangent line exists and may be computed from an implicit equation of the curve. Normal line to a curve The line perpendicular to the tangent line to a curve at the point of tangency is called the normal line to the curve at that point. The slopes of perpendicular lines have product −1, so if the equation of the curve is y = f(x) then slope of the normal line is and it follows that the equation of the normal line at (X, Y) is Similarly, if the equation of the curve has the form f(x, y) = 0 then the equation of the normal line is given by If the curve is given parametrically by then the equation of the normal line is Angle between curves The angle between two curves at a point where they intersect is defined as the angle between their tangent lines at that point. More specifically, two curves are said to be tangent at a point if they have the same tangent at a point, and orthogonal if their tangent lines are orthogonal. Multiple tangents at the origin The formulas above fail when the point is a singular point. In this case there may be two or more branches of the curve which pass through the point, each branch having its own tangent line. 
When the point is the origin, the equations of these lines can be found for algebraic curves by factoring the equation formed by eliminating all but the lowest degree terms from the original equation. Since any point can be made the origin by a change of variables, this gives a method for finding the tangent lines at any singular point. For example, the equation of the limaçon trisectrix shown to the right is Expanding this and eliminating all but terms of degree 2 gives which, when factored, becomes So these are the equations of the two tangent lines through the origin. Two circles of non-equal radius, both in the same plane, are said to be tangent to each other if they meet at only one point. Equivalently, two circles, with radii of ri and centers at (xi, yi), for i = 1, 2 are said to be tangent to each other if - Two circles are externally tangent if the distance between their centres is equal to the sum of their radii. - Two circles are internally tangent if the distance between their centres is equal to the difference between their radii. Surfaces and higher-dimensional manifolds The tangent plane to a surface at a given point p is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at p, and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to p as these points converge to p. More generally, there is a k-dimensional tangent space at each point of a k-dimensional manifold in the n-dimensional Euclidean space. - Newton's method - Normal (geometry) - Osculating circle - Osculating curve - Supporting line - Tangent cone - Tangential angle - Tangential component - Tangent lines to circles - Noah Webster, American Dictionary of the English Language (New York: S. Converse, 1828), vol.2, p.733, - Descartes, René (1954). The geometry of René Descartes. Courier Dover. p. 95. ISBN 0-486-60068-8. - R. E. Langer (October 1937). "Rene Descartes". American Mathematical Monthly (Mathematical Association of America) 44 (8): 495–512. doi:10.2307/2301226. JSTOR 2301226. - Edwards Art. 191 - Strickland-Constable, Charles, "A simple method for finding tangents to polynomial graphs", Mathematical Gazette, November 2005, 466-467. - Edwards Art. 192 - Edwards Art. 193 - Edwards Art. 196 - Edwards Art. 194 - Edwards Art. 195 - Edwards Art. 197 - Circles For Leaving Certificate Honours Mathematics by Thomas O’Sullivan 1997 - J. Edwards (1892). Differential Calculus. London: MacMillan and Co. pp. 143 ff. |Wikimedia Commons has media related to: Tangency| |Wikisource has the text of the 1921 Collier's Encyclopedia article Tangent.| - Hazewinkel, Michiel, ed. (2001), "Tangent line", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Weisstein, Eric W., "Tangent Line", MathWorld. - Tangent to a circle With interactive animation - Tangent and first derivative — An interactive simulation - The Tangent Parabola by John H. Mathews
http://en.wikipedia.org/wiki/Tangent
13
60
Most computers today use two's complement representation for negative integer numbers. The UNIVAC® 1100 family, however, used one's complement. In this page we'll look at the differences between the different representations and the most eccentric aspect of one's complement: minus zero, a number dearly cherished by clever 1100 series programmers. When discussing binary numbers, we'll use octal notation, as was the practice on the 1100 series. Each digit between 0 and 7 represents three binary bits, and hence a 36-bit word is written as 12 octal digits. The most obvious way to represent positive and negative numbers in a binary computer is signed magnitude, a direct analogue to how numbers are written in decimal notation. A sign bit is dedicated to indicating whether the number is positive or negative, with the rest of the bits giving the magnitude of the number. On almost all computers the most significant bit is used for the sign bit, zero signifying positive and one negative. (There's no reason you couldn't use some other bit as the sign or have one mean positive, but that would complicate the electronics and render impossible several programming tricks we'll get into later on.) Let's consider, then, how we'd store the number 11, both positive and negative, on a 36-bit signed magnitude binary computer. The binary representation of 11 is 1011, or 013 in octal, so positive 11 becomes: Since we're using the most significant (235) bit of the word to denote the sign, with one denoting negative, it is evident that the 36-bit signed magnitude representation of −11 is: (Remember, octal digit 4 corresponds to bit pattern 100, so a leading 4 indicates the sign bit is set, informing us that the value in the rest of the word is negative.) Signed magnitude is straightforward, easy to understand since it parallels the notation we're used to, and easy to decode by hand while debugging. So naturally, you'd expect there to be an excellent engineering reason why it shouldn't be used, and indeed there is. The fly lands in your Pepsi glass at the point where you move on from storing numbers to doing arithmetic with them. Consider: when the computer adds two numbers, with signed magnitude it first has to look at the signs of both numbers, then decide, based upon the signs, whether to add them or subtract one from the other, and what sign the result will bear. This doesn't seem like such a difficult problem today, when hardware is, by the standards of the 1950's when the 1100 series was designed, free, but when you put yourself in the place of designers who knew that each logic gate cost several dollars and, in the vacuum tube era, took up substantial space and gave off more heat than an entire computer does today, the need to simplify was compelling. Wouldn't it be great if the computer's arithmetic unit never needed to know if a number was positive or negative? This turns out to be possible, and led to the wide adoption of other representations of negative numbers, which we'll now examine. (As hardware prices have plummeted, the relative advantages of various ways of doing arithmetic have become less significant. IEEE Std 754 floating point, used by virtually all contemporary computers, employs signed magnitude for negative numbers.) Designers of mechanical calculators confronted the problem of representing negative numbers decades before the first electronic computers. 
With only gears and levers at their disposal, simplicity was essential, and they developed an ingenious way to represent negative numbers called ten's complement. Suppose we have a four digit decimal calculator. The number 11 will then be represented as 0011. What if we want to put in −11? In ten's complement, if the number is negative we subtract its magnitude from the number one greater than our register size and enter the result. The largest number our four-digit calculator can handle is 9999; to get the ten's complement of −11, we compute 10000 − 11 = 9989. The point of all this is now we have a way to compute with positive and negative numbers without ever worrying about their signs. To see how it works, let's add 0011 and 9989, the ten's complement of −11. Crank, grind, crunch, and our calculator prints 0000 on the tape. Wait a minute, you exclaim, when I add 0011 and 9989, I get 10000! That's right, but remember we're using a four digit calculator, so the carry into the fifth digit just disappears, leaving the result of 0000. Since adding 0011 and the ten's complement of −11, 9989, yielded zero (as long as we forget about the carry, as the calculator does), we seem to have found a way that the calculator can work without worrying about the sign and, as a little experimentation will show, we have indeed. Nicer still, we can leave to the user whether the calculator is considered to work on positive numbers from 0 to 9999 or signed numbers in the range from −5000 to +4999; all it takes is a little “user interface”, a few more gears to convert numbers back and forth to ten's complement, and we're in business. The range of numbers looks a little odd, doesn't it? Let's see where that crept in. Taking the rules for forming the ten's complement, −1 becomes 9999, −2 9998, and so on, with the largest possible negative value being −5000, 5000. But since 5000 is the most negative number, we can't have positive numbers greater than 4999 because otherwise they'd overflow into the negative range. Irritating, but I suppose we'll learn to live with it. More serious is discovering you can't divide a negative ten's complement number by 10 by shifting it right one place, as we do so easily with positive numbers. Consider 11: if we want to divide 0011 by 10, we just shift it to the right one place yielding the quotient of 0001 (the remainder is discarded). If you want to make a calculator multiply, this is extremely nice since you can rig the gear wheels to shift left and right and do everything with addition. But it doesn't work for negative numbers. Consider −11, which we represent as 9989. If we shift that right one position (understanding that we shift in 9 at the left when the number was negative to start with, in order to preserve the ten's complement of the top digit), we end up with 9998, which is −2. Bzzzzzt…wrong, we should have gotten −1, or 9999. The culprit in the divide-by-shifting caper turns out to be the same asymmetry around zero which caused us to end up with a negative number with no positive counterpart. We've been discussing decimal computation so far. For binary computers there is a precise counterpart to ten's complement: two's complement. (The technique works in any number base; if you were computing in hexadecimal, you could use “sixteen's complement” for negative numbers.) Suppose that instead of a four digit decimal calculator we have a 12-bit binary computer. (I choose 12 bits since that's four octal digits). 
Taking the number 11 again, in binary that's 000000001011 or, in octal 0013. To form the two's complement for −11, we subtract the magnitude from binary 1000000000000 (octal 10000), which gives binary 111111110101, or octal 7765. Adding this back to 0013 yields 0000, confirming that the scheme works as well in base two as it did for ten. Now that we've made the transition to binary, the problem with shifting two's complement numbers is even more nettlesome. Dividing by a power of two is something you do all the time in software for a binary computer and, while it's OK for positive numbers, you can't do it if the dividend might be negative. A compiler generating code for a FORTRAN program, for example, generally doesn't know when compiling: J = I / 256 whether I might contain a negative value. Not knowing, it has no alternative but to generate a divide by 256 rather than a shift by 8 bits which, on computers like the 1100 series, is about ten times faster. Further, once in the binary world you start doing lots of logical operations on bits. But, you quickly discover, logical negation (changing all the ones into zeroes and vice versa) is not the same as calculating the two's complement for a negative number: the values will always differ by one. Since logical and arithmetic negation are both things you do all the time, that means you have to put in separate instructions for each—more expensive hardware, and the programmer has to be careful to choose the right one, depending on how the value in a word is being used…messy. It was these shortcomings, especially apparent in a binary architecture, which motivated the engineers designing the 1100 series to look beyond two's complement for an even better solution. In the the next section we'll see what they found. If their choice seems odd, it's because in the intervening years the computer industry has pretty much settled on two's complement notation for negative integers, deciding, as it were, that the shortcomings we've discussed above can be lived with, that the merits of other notations are outweighed by disadvantages of their own as least as serious as the drawbacks of two's complement. So, when you examine the instruction set of a present day microprocessor such as the Intel x86, you'll find separate instructions for negating logical values NOT) and integers ( NEG) and, in many programming primers, an explanation of what each does and when to use Since it appears most of the problems we encountered with ten's and two's complement are due to the asymmetry around zero, what if we eliminate it by making all the negative numbers one less? It turns out this is equivalent (for the decimal case) to subtracting each individual digit of the magnitude of the number from the number nine, yielding a nine's complement representation. Turning to the familiar case of 11, +11 is written as 0011 and to get −11, we subtract each digit from 9, resulting in 9988. Recalling that the two's complement for −11 was 9989, you'll see indeed we have moved all the negative values down and eliminated the asymmetry at zero. It's also evident that since subtracting a decimal digit from 9 is its own inverse, we can invert the sign of any number, positive or negative, by taking its nine's complement. Subtracting each digit of 9988 gives us back 0011. So far, so good; this is looking better and better. Now let's try some arithmetic, for example adding −1 and 10. The one's complement of −1 is 9998, and adding that to 0010, the calculator prints 0008. Uh oh, wrong answer. 
Examining more closely shows that in nine's complement the carry we so blithely discarded in ten's complement now has to be accounted for. Adding 9998 and 0010 on a piece of paper rather than our busted calculator gives a sum of 10008, with a carry out of the fourth digit. To compute correctly in nine's complement, this carry has to be added back in to the least significant (rightmost) digit of the sum, a process UNIVAC engineers referred to as an “end-around carry”. Adding additional gears to our calculator to handle this carry adds the one carried out to the four low digits of 0008, and now the correct sum, 0009 appears on the tape. Excellent! The problem of shifting negative numbers has also been fixed. Shifting −11 (9988) right one place (again, shifting in a nine if negative) yields 9998, for which the nine's complement (subtracting each digit from 9, remember) is 0001. In fact, this works in all circumstances. The oddity of a negative number with no positive counterpart is also gone; the nine's complement of the largest positive number, 4999 is 5000, which represents −4999 just as you'd expect. From a hardware standpoint, the need for the end-around carry looks bothersome, especially when you consider that it might result in propagating a carry all the way back through a number you've just finished adding. On the other hand, the process of negating a number is simplified. Now we'll move on to binary to see how it works with bits instead of decimal digits. As you'd expect, the binary counterpart of nine's complement is one's complement, and we form the one's complement of a binary number by subtracting each digit from 1. But with binary numbers, that is precisely the same as just replacing all the one bits with zeroes and vice versa! One's complement has eliminated the distinction between logical and arithmetic negation and the need for separate instructions for each operation. In summary, by admitting the added complexity of end-around carry, we have obtained a way of representing negative numbers which is symmetric, in which power-of-two division can be done by shifting for all numbers, and where negating a number and inverting all its bits are one and the same thing. But we've also gotten something else as well: If the one's complement of 1, binary 000000000001, is 111111111110, then what is the one's complement of zero, binary 000000000000? Well, of course that works out to be 111111111111: minus zero! Let's explore the consequences of this, shifting back to the decimal equivalent of nine's complement since that's easier to follow. The nine's complement of 0000 is 9999, decimal minus zero. What happens, for example, when we add minus zero to, say, ten (0010)? The sum of 10 and 9999 is 10009, and performing the end-around carry gets us back to 0010, so minus zero is well behaved in addition and, it turns out, all other arithmetic operations as well. If we add +0 (0000) and −0 (9999) we get 9999, −0, which is still zero, so that's okay as well, if a bit odd at first glance. Maybe if we never start out with minus zero, we can ignore it? Unfortunately, no: consider the case of adding +11 (0011) and −11 (9988). We get zero, sure enough, but it's minus zero, 9999. Now suppose we want to test whether a number is zero, something any program needs to do frequently. 
That seems to have gotten a bit sticky, since we've seen that minus zero can pop out of an innocuous calculation (due to the way the adder in the 1100 series operated, minus zero was generated under different circumstances than in this simplified example, but it was generated nonetheless). It appears that every time we want to test for zero, we have to see if it's +0 or −0: a real drag. We could always modify the hardware to do this automatically, so that all zero test instructions considered either +0 or −0 to be zero, and this is precisely what the UNIVAC 1100 series (and most other one's complement architectures) did. On a two's complement machine, there's one and only one zero made up of all zero bits, all ones denoting −1, so there is no problem testing for zero.

As Prof. William C. Lynch remarked in the heyday of minus zero, “give a programmer a glitch and he'll try to drive a truck through it”, and minus zero was a glitch big enough to roll a whole convoy through, good buddy. Consider the following UNIVAC assembly code (consult the instruction set reference if you're hazy on the op codes), and remember that UNIVAC 1100 test instructions skipped the next instruction in line if the condition was true.

    LA   A0,VALUE1            Load first number
    LA   A1,VALUE2            Load second number
    TNE  A0,A1                Are they equal?
    J    ARE$EQUAL            Yes, they are
    ANA  A0,A1                Not equal, huh?  Subtract them
    TNZ  A0                   Is the result zero?
    J    HOW$DID$THIS$HAPPEN  Huh?  Unequal but difference is zero?

In order to be useful when comparing arbitrary bit patterns, the Test Equal (TE) and Test Not Equal (TNE) and related instructions only consider two values equal if they contain precisely the same bit pattern. Suppose VALUE1 is −0 (777777777777 octal) and VALUE2 is +0 (000000000000 octal). You can't get more different than that, can you? Every bit is different, so clearly these values are not equal. But when we subtract the second from the first, since we're subtracting zero from zero we end up with (as it happens, minus, see below for details) zero, and the Test Nonzero (TNZ) instruction, which considers both +0 and −0 to be zero, fails to skip since the difference in A0 is (minus) zero.

This example may seem a bit contrived, but consider the following trap which many a novice 1100 programmer stepped into, especially those who first learned on a two's complement machine. (Enclosing a value in parentheses made a reference to a memory cell containing that value. [Grognards: yes, I remember, and would have coded “,U”, but I don't want to explain that here].)

         <Do some computation>
    TNE  A0,(0)               Was the result zero?
    J    GOT$ZERO             Yes.  We're done

If the calculation happens to end up with −0, gotcha!…that's not zero according to the bit-by-bit test TNE.

Another trap which frequently snared those used to two's complement was confusing “positive” and “negative”, which on a one's complement machine are properties shared by all numbers, including zero. The various instructions which tested positive and negative simply tested whether the most significant bit was zero (positive) or one (negative). This could lead to the puzzle:

    TP   VALUE                Is it positive?
    J    IS$NEGATIVE          Nope, it's negative
    TNZ  VALUE                Is the result zero?
    J    HOW$DID$THIS$HAPPEN  Huh?  Positive but equal to zero?

If you really needed to know if a value was greater than zero, you wrote instead:

    LA   A0,VALUE             Load value from memory
    TLE  A0,(0)               Less than or equal to zero?
    J    IS$POSITIVE          No, then it must be greater than zero

But even this had a little added twist: if you compared the two, +0 was greater than −0, leading to the conundrum:

    LA   A0,VALUE1            Load first number
    LA   A1,VALUE2            Load second number
    ANU  A1,A0                Are they equal?  (Difference stored in A2)
    JNZ  A2,NOT$EQUAL         No, they're not
    TG   A0,A1                Ahhhh, equal.  Is A1 > A0, perchance?
    J    YES$BUT$HOW          Easy, A1 = +0, A0 = -0.  Neat, huh?

I could go on and on. In practice, once you learned the fundamental tricks for testing numbers, you could ignore minus zero in most situations. The 1100 architecture helped by being based upon what was called a “subtractive adder”; the fundamental arithmetic operation was subtraction, not addition, which meant that subtracting a number from itself yielded +0, not −0, which greatly reduced the probability −0 would appear in typical computations. Still, any code which, for example, extracted bits from an integer by logical operations or shifting had to be wary of the possibility its “zero argument” might be, in fact, made up of all one bits. It was not uncommon, for example, to see code which wanted to extract the low-order 6 bits of what purported to be an unsigned integer do the following:

    LA   A0,VALUE             Load the value
    AA   A0,(0)               Add zero (hee hee hee)
    AND  A0,(077)             Extract low-order 6 bits

Add zero? Welcome to Minus Zero Logic (MZL) which, based on the fine details of the 1100 arithmetic unit, obeyed the following rule when adding zero and zero: the sum is +0 unless both operands are −0, in which case it is −0. Adding zero therefore guarantees that even if we started out with −0, we'll have +0 by the time of the AND, averting confusion between −0 and 63 in the least significant 6 bits.

This elementary introduction to the wonders of minus zero takes us only to the loading dock where inspired (and totally daft) UNIVAC programmers departed, applying minus zero to return additional status from subroutines (if the result is zero, an error has occurred; if minus zero, a really awful error), or observing that, assuming +0 means TRUE and −0 FALSE, the result of the subtraction:

    LA   A0,B                 Load B
    ANA  A0,A                 Subtract (Add Negative) A

gives, in register A0, the result of the logical implication (conditional) function with A as the antecedent and B the consequent.
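Finally, a small Python model of the minus-zero behaviour described above: the difference between a bit-for-bit comparison (what TE/TNE do) and an arithmetic zero test (what TZ/TNZ consider zero), plus the effect of the "add +0" trick under the MZL rule as stated. It continues the 12-bit toy representation used earlier; the helper names are mine, and this is an illustration of the rules in the text, not a cycle-accurate model of the 1100 adder.

    MASK = 0o7777
    PLUS_ZERO, MINUS_ZERO = 0o0000, 0o7777

    def bitwise_equal(a, b):
        """What TE/TNE compare: the exact bit patterns."""
        return a == b

    def is_arithmetic_zero(w):
        """What the zero-test instructions consider zero: either +0 or -0."""
        return w in (PLUS_ZERO, MINUS_ZERO)

    def add_plus_zero(w):
        """Model of 'AA A0,(0)': under the MZL rule above, adding +0 leaves
        every value alone except -0, which becomes +0."""
        return PLUS_ZERO if w == MINUS_ZERO else w

    print(bitwise_equal(PLUS_ZERO, MINUS_ZERO))     # False: every bit differs
    print(is_arithmetic_zero(MINUS_ZERO))           # True: TNZ would not skip
    print(oct(MINUS_ZERO & 0o77))                   # 0o77 = 63: the trap
    print(oct(add_plus_zero(MINUS_ZERO) & 0o77))    # 0o0: what the AA (0) buys you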
http://fourmilab.ch/documents/univac/minuszero.html
The Network Access Layer is the lowest layer of the TCP/IP protocol hierarchy. The protocols in this layer provide the means for the system to deliver data to other devices on a directly attached network. It defines how to use the network to transmit an IP datagram. Unlike higher-level protocols, it must know the details of the underlying network to correctly format the data being transmitted to comply with the network constraints. The TCP/IP Network Access Layer can encompass the functions of all three lower layers of the OSI reference model: the Network Layer, Data Link Layer, and Physical Layer. Functions performed at this level include encapsulation of IP datagrams into the frames transmitted by the network, and mapping of IP addresses to the physical addresses used by the network. The network access layer is responsible for exchanging data between a host and the network and for delivering data between two devices on the same network. Node physical addresses are used to accomplish delivery on the local network. TCP/IP has been adapted to a wide variety of network types, including circuit switching (such as X.21), packet switching (such as X.25), Ethernet, the IEEE 802.x protocols, frame relay, etc. Frames in the network access layer encode EtherType information that is used to demultiplex data associated with specific upper-layer protocol stacks. Figure 69 shows processes/applications and protocols that rely on the Network Access Layer for the delivery of data to their counterparts across the network.

The Internetwork Layer is the heart of TCP/IP, and the Internet Protocol (IP) is its most important protocol. IP provides the basic packet delivery service on which TCP/IP networks are built. All protocols, in the layers above and below IP, use the Internet Protocol to deliver data. All TCP/IP data flows through IP, incoming and outgoing, regardless of its final destination. The Internetwork Layer is responsible for routing messages through internetworks. Devices responsible for routing messages between networks are called gateways in TCP/IP terminology, although the term router is also used with increasing frequency. The TCP/IP protocol at this layer is the Internet Protocol (IP). In addition to the physical node addresses utilised at the network access layer, the IP protocol implements a system of logical host addresses called IP addresses. The IP addresses are used by the internet and higher layers to identify devices and to perform internetwork routing. The Address Resolution Protocol (ARP) enables IP to identify the physical address that matches a given IP address.

The functions of the Internet Protocol (IP) include: defining the datagram, which is the basic unit of transmission in the Internet; defining the Internet addressing scheme; moving data between the Network Access Layer and the Host-to-Host Transport Layer; routing datagrams to remote hosts; and performing fragmentation and reassembly of datagrams. The datagram is the packet format defined by the Internet Protocol. The Internet Protocol delivers the datagram by checking the Destination Address (DA). This is an IP address that identifies the destination network and the specific host on that network. If the destination address is the address of a host on the local network, the packet is delivered directly to the destination; otherwise the packet is passed to a gateway for delivery. Gateways are devices that switch packets between the different physical networks. Deciding which gateway to use is called routing. IP makes the routing decision for each individual packet. IP deals with data in chunks called datagrams.
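The delivery decision just described (deliver directly if the destination is on the local network, otherwise hand the packet to a gateway) can be sketched in a few lines of Python with the standard ipaddress module; the network, host, and gateway addresses below are invented documentation values, not anything from the text.

    import ipaddress

    local_net = ipaddress.ip_network("192.0.2.0/24")   # directly attached network
    default_gateway = ipaddress.ip_address("192.0.2.1")

    def next_hop(destination):
        """Deliver directly if the destination is on the local network,
        otherwise hand the datagram to a gateway."""
        dest = ipaddress.ip_address(destination)
        if dest in local_net:
            return dest                 # direct delivery on the local network
        return default_gateway          # let the gateway route it onward

    print(next_hop("192.0.2.42"))       # 192.0.2.42  (local delivery)
    print(next_hop("198.51.100.7"))     # 192.0.2.1   (via the gateway)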
The terms packet and datagram are often used interchangeably, although a packet is a data link-layer object and a datagram is a network layer object. In many cases, particularly when using IP on Ethernet, a datagram and a packet refer to the same chunk of data. There's no guarantee that the physical link layer can handle a packet of the network layer's size. If the media's MTU is smaller than the network's packet size, then the network layer has to break large datagrams down into packet-sized chunks that the data link layer and physical layer can digest. This process is called fragmentation. The host receiving a fragmented datagram reassembles the pieces in the correct order.

IP Datagram Format: Figure 70 shows the IP Datagram Format. The fields in figure 70 are as follows:
Type of Service: Data in this field indicates the quality of service desired. The effects of values in the precedence fields depend on the network technology employed, and values must be configured accordingly. Format of the Type of Service field: Bit 3: Delay (0 = normal delay, 1 = low delay); Bit 4: Throughput (0 = normal throughput, 1 = high throughput); Bit 5: Reliability (0 = normal reliability, 1 = high reliability); Bits 6-7: Reserved.
Total Length: The length of the datagram in octets, including the IP header and data. This field enables datagrams to consist of up to 65,535 octets. The standard recommends that all hosts be prepared to receive datagrams of at least 576 octets in length.
Identification: An identification field used to aid reassembly of the fragments of a datagram.
Flags: This field contains three control bits. If a datagram is fragmented, the MF bit is 1 in all fragments except the last. Bit 0: Reserved, must be 0. Bit 1 (DF): 1 = Do not fragment, 0 = May fragment. Bit 2 (MF): 1 = More fragments, 0 = Last fragment.
Fragment Offset: For fragmented datagrams, indicates the position in the datagram of this fragment.
Time-to-live: Indicates the maximum time the datagram may remain on the network.
Protocol: The upper layer protocol associated with the data portion of the datagram.
Header Checksum: A checksum for the header only. This value must be recalculated each time the header is modified.
Source Address: The IP address of the host that originated the datagram.
Destination Address: The IP address of the host that is the final destination of the datagram.
Options: May contain 0 or more options.
Padding: Filled with bits to ensure that the size of the header is a 32-bit multiple.

Internet gateways are commonly referred to as IP routers because they use the Internet Protocol to route packets between networks. Gateways forward packets between networks; hosts don't. However, if a host is connected to more than one network (a multihomed host), it can forward packets between the networks. When a multihomed host forwards packets, it acts just like any other gateway and is considered to be a gateway. Systems can only deliver packets to other devices attached to the same physical network. Figure 71 shows Routing Through Gateways. The hosts (end-systems) process packets through all four protocol layers, while the gateways (intermediate-systems) process the packets only up to the Internet layer where the routing decisions are made. As a datagram is routed through different networks, it may be necessary for the IP module in the gateway to divide the datagram into smaller pieces. A datagram received from one network may be too large to be transmitted in a single packet on a different network.
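For readers who like to see the bytes, here is a minimal Python sketch that packs and unpacks the fixed 20-byte IPv4 header whose fields are listed above (ignoring options and the checksum calculation); the field values and addresses are invented for illustration.

    import socket
    import struct

    IPV4_HEADER = struct.Struct("!BBHHHBBH4s4s")   # network byte order, 20 bytes

    def build_header(total_length, identification, ttl, protocol, src, dst):
        version_ihl = (4 << 4) | 5        # IPv4, header of 5 x 32-bit words
        tos = 0
        flags_fragment = 0
        checksum = 0                      # left zero in this sketch
        return IPV4_HEADER.pack(version_ihl, tos, total_length, identification,
                                flags_fragment, ttl, protocol, checksum,
                                socket.inet_aton(src), socket.inet_aton(dst))

    def parse_header(raw):
        (version_ihl, tos, total_length, identification, flags_fragment,
         ttl, protocol, checksum, src, dst) = IPV4_HEADER.unpack(raw[:20])
        return {"version": version_ihl >> 4,
                "header_words": version_ihl & 0x0F,
                "total_length": total_length,
                "ttl": ttl,
                "protocol": protocol,          # e.g. 6 = TCP, 17 = UDP
                "src": socket.inet_ntoa(src),
                "dst": socket.inet_ntoa(dst)}

    hdr = build_header(40, 1, 64, 6, "192.0.2.1", "198.51.100.7")
    print(parse_header(hdr))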
This condition only occurs when a gateway interconnects dissimilar physical networks. Each type of network has a Maximum Transmission Unit (MTU), which is the largest packet that it can transfer. If the datagram received from one network is longer than the other network's MTU, it is necessary to divide the datagram into smaller fragments for transmission. This process is called fragmentation.

Passing Datagrams to the Transport Layer: When IP receives a datagram that is addressed to the local host, it must pass the data portion of the datagram to the correct transport layer protocol. This is done by using the protocol number from the datagram header. Each transport layer protocol has a unique protocol number that identifies it to IP.

Internet Control Message Protocol (ICMP): ICMP is part of the Internet layer and uses the IP datagram delivery facility to send its messages. ICMP sends messages that perform control, error reporting, and informational functions for TCP/IP. Figure 72 shows the ICMP Header Format.
Flow control: When datagrams arrive too fast for processing, the destination host or an intermediate gateway sends an ICMP Source Quench Message back to the sender. This tells the source to temporarily stop sending datagrams.
Detecting unreachable destinations: When a destination is unreachable, the system detecting the problem sends an ICMP Destination Unreachable Message to the datagram's source. If the unreachable destination is a network or host, the message is sent by an intermediate gateway. But if the destination is an unreachable port, the destination host sends the message.
Redirecting routes: A gateway sends the ICMP Redirect Message to tell a host to use another gateway, presumably because the other gateway is a better choice. This message can only be used when the source host is on the same network as both gateways.
Checking remote hosts: A host can send the ICMP Echo Message to see if a remote system's Internet Protocol is up and operational. When a system receives an echo message, it sends the same packet back to the source host (e.g. PING).
Figure 73 shows processes/applications and protocols that rely on the Internet Layer for the delivery of data to their counterparts across the network.

The Host-to-Host Transport Layer has two major jobs: it must subdivide user-sized data buffers into network-layer-sized datagrams, and it must enforce any desired transmission control such as reliable delivery. The two most important protocols in this layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP provides a reliable data delivery service with end-to-end error detection and correction. UDP provides a low-overhead, connectionless datagram delivery service. Both protocols deliver data between the Application Layer and the Internet Layer. Application programmers can choose whichever service is more appropriate for their specific applications. The Host-to-Host Transport Layer is responsible for end-to-end data integrity. Two protocols are employed at this layer: the Transmission Control Protocol and the User Datagram Protocol. TCP provides reliable, full-duplex connections and reliable service by ensuring that data is retransmitted when a transmission results in an error. Also, TCP enables hosts to maintain multiple, simultaneous connections. UDP provides an unreliable service that enhances network throughput when error correction is not required at the host-to-host layer.
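The protocol-number demultiplexing described under "Passing Datagrams to the Transport Layer" above can be sketched as a simple dispatch table; the handler functions are placeholders of my own, while the protocol numbers (1 for ICMP, 6 for TCP, 17 for UDP) are the standard IANA assignments carried in the IP header's Protocol field.

    def handle_icmp(payload): print("ICMP message:", len(payload), "bytes")
    def handle_tcp(payload):  print("TCP segment:",  len(payload), "bytes")
    def handle_udp(payload):  print("UDP datagram:", len(payload), "bytes")

    TRANSPORT_HANDLERS = {
        1:  handle_icmp,
        6:  handle_tcp,
        17: handle_udp,
    }

    def deliver_to_transport(protocol_number, payload):
        """Hand the datagram's data portion to the protocol named by the
        protocol number in the IP header, if we know it."""
        handler = TRANSPORT_HANDLERS.get(protocol_number)
        if handler is None:
            print("No handler for protocol", protocol_number, "- datagram dropped")
        else:
            handler(payload)

    deliver_to_transport(17, b"\x00" * 36)   # delivered to the UDP handler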
Protocols defined at this layer accept data from application protocols running at the Application Layer, encapsulate it in the protocol header, and deliver the data segment thus formed to the lower IP layer for routing. Unlike the IP protocol, the transport layer is aware of the identity of the ultimate user representative process. As such, the Transport Layer, in the TCP/IP suite, embodies what data communications are all about: the delivery of information from an application on one computer to an application on another computer.

User Datagram Protocol (UDP): UDP gives application programs direct access to a datagram delivery service, like the delivery service that IP provides. This allows applications to exchange messages over the network with a minimum of protocol overhead. UDP is an unreliable (it doesn't care about the quality of the deliveries it makes), connectionless (it doesn't establish a connection on behalf of user applications) datagram protocol. Within your computer, UDP will deliver data correctly. UDP is used as a data transport service when the amount of data being transmitted is small; in that case the overhead of creating connections and ensuring reliable delivery may be greater than the work of retransmitting the entire data set. Broadcast-oriented services use UDP, as do those in which repeated, out-of-sequence, or missed requests have no harmful side effects. Since no state is maintained for UDP transmission, it is ideal for repeated, short operations such as the Remote Procedure Call protocol. UDP packets can arrive in any order. If there is a network bottleneck that drops packets, UDP packets may not arrive at all. It's up to the application built on UDP to determine that a packet was lost, and to resend it if necessary. NFS and NIS are built on top of UDP because of its speed and statelessness. While the performance advantages of a fast protocol are obvious, the stateless nature of UDP is equally important. Without state information in either the client or the server, crash recovery is greatly simplified.

Figure 74 shows the UDP Datagram Format. The fields in figure 74 are as follows:
Source Port (16 bits): This field is optional and specifies the port number of the application that is originating the user data.
Destination Port (16 bits): This is the port number pertaining to the destination application.
Length (16 bits): This field describes the total length of the UDP datagram, including both data and header information.
UDP Checksum (16 bits): Integrity checking is optional under UDP. If turned on, this field is used by both ends of the communication channel for data integrity checks.

Figure 75 shows the relationship between the UDP and IP headers. There are two points to make: what IP considers to be the data field is in fact another piece of formatted information including both the UDP header and user protocol data, and to IP it should not matter what the data field is hiding. The details of the header information for each protocol should clearly convey to the reader the purpose of the protocol.

Transmission Control Protocol (TCP): TCP is a fully reliable, connection-oriented, acknowledged, byte-stream protocol that provides reliable data delivery across the network and in the proper sequence. TCP supports data fragmentation and reassembly. It also supports multiplexing/demultiplexing using source and destination port numbers in much the same way they are used by UDP. TCP provides reliability with a mechanism called Positive Acknowledgement with Retransmission (PAR).
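As a small illustration of the UDP format just described, the following sketch packs the 8-byte header (source port, destination port, length, checksum); the ports and payload are invented, and the optional checksum is simply left at zero to mean "not computed".

    import struct

    def build_udp_datagram(src_port, dst_port, payload):
        """Prepend the 8-byte UDP header to the payload."""
        length = 8 + len(payload)              # header plus data, in octets
        header = struct.pack("!HHHH", src_port, dst_port, length, 0)
        return header + payload

    datagram = build_udp_datagram(5000, 53, b"example payload")
    print(len(datagram), datagram[:8].hex())   # 23 bytes total, 8-byte header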
Simply stated, a system using PAR sends the data again unless it hears from the remote system that the data arrived okay. The unit of data exchanged between co-operating TCP modules is called a segment. Figure 76 shows the data segment format of the TCP protocol. The fields in figure 76 are as follows:
Source Port (16 bits): Specifies the port on the sending TCP module.
Destination Port (16 bits): Specifies the port on the receiving TCP module.
Sequence Number (32 bits): Specifies the sequence position of the first data octet in the segment. When the segment opens a connection, the sequence number is the Initial Sequence Number (ISN) and the first octet in the data field is at sequence ISN+1.
Acknowledgement Number (32 bits): Specifies the next sequence number that is expected by the sender of the segment. TCP indicates that this field is active by setting the ACK bit, which is always set after a connection is established.
Data Offset (4 bits): Specifies the number of 32-bit words in the TCP header.
Reserved (6 bits): Must be zero. Reserved for future use.
Control Bits (6 bits): The six control bits are as follows. URG: when set, the Urgent Pointer field is significant. ACK: when set, the Acknowledgement Number field is significant. PSH: initiates a push function. RST: forces a reset of the connection. SYN: synchronises sequencing counters for the connection; this bit is set when a segment requests the opening of a connection. FIN: no more data; closes the connection.
Window (16 bits): Specifies the number of octets, starting with the octet specified in the acknowledgement number field, which the sender of the segment can currently accept.
Checksum (16 bits): An error control checksum that covers the header and data fields. It does not cover any padding required to have the segment consist of an even number of octets. The checksum also covers a 96-bit pseudoheader that includes the source and destination addresses, the protocol, and the segment length. This information is forwarded with the segment to IP to protect TCP from misrouted segments. The value of the segment length field includes the TCP header and data, but doesn't include the length of the pseudoheader.
Urgent Pointer (16 bits): Identifies the sequence number of the octet following urgent data. The urgent pointer is a positive offset from the sequence number of the segment.
Options (variable): Options are available for a variety of functions.
Padding (variable): Zero-value octets are appended to the header to ensure that the header ends on a 32-bit word boundary.

Figure 77 shows the format of the TCP pseudoheader. Each segment contains a checksum that the recipient uses to verify that the data is undamaged. If the data segment is received undamaged, the receiver sends a positive acknowledgement back to the sender. If the data segment is damaged, the receiver discards it. After an appropriate time-out period, the sending TCP module retransmits any segment for which no positive acknowledgement has been received.

TCP is connection-oriented. It establishes a logical end-to-end connection between the two communicating hosts. Control information, called a handshake, is exchanged between the two endpoints to establish a dialogue before data is transmitted. TCP indicates the control function of a segment by setting the appropriate bit in the flags field of the segment header. Figure 78 shows how TCP establishes virtual circuits over which applications exchange data.
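The checksum and pseudoheader description above can be illustrated with a short sketch of the standard Internet checksum (the one's complement of the one's complement sum of 16-bit words) computed over the 96-bit pseudoheader followed by the segment; the addresses and the dummy segment are invented for illustration.

    import socket
    import struct

    def internet_checksum(data):
        """One's complement sum of 16-bit words, then complemented."""
        if len(data) % 2:
            data += b"\x00"                      # pad to an even number of octets
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
        while total >> 16:                       # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def tcp_checksum(src_ip, dst_ip, segment):
        # Pseudoheader: source address, destination address, a zero octet,
        # the protocol (6 for TCP), and the segment length -- 96 bits in all.
        pseudoheader = struct.pack("!4s4sBBH",
                                   socket.inet_aton(src_ip),
                                   socket.inet_aton(dst_ip),
                                   0, 6, len(segment))
        return internet_checksum(pseudoheader + segment)

    dummy_segment = b"\x00" * 20                 # a bare 20-byte TCP header
    print(hex(tcp_checksum("192.0.2.1", "198.51.100.7", dummy_segment)))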
The type of handshake used by TCP is called a three-way handshake because three segments are exchanged. Figure 79 shows a Three-Way Handshake.

Reliability and Acknowledgement: TCP employs the positive acknowledgement with retransmission technique for the purpose of achieving reliability in service. Figure 80 shows the positive acknowledgement with retransmission technique as a laddergram depicting the events taking place between two hosts. The arrows represent transmitted data and/or acknowledgements, and time is represented by the vertical distance down the ladder. When TCP sends a data segment, it requires an acknowledgement from the receiving end. The acknowledgement is used to update the connection state table. An acknowledgement can be positive or negative. A positive acknowledgement implies that the receiving host received the data and that it passed the integrity check. A negative acknowledgement implies that the failed data segment needs to be retransmitted. It can be caused by failures such as data corruption or loss.

Figure 81 shows how TCP implements a time-out mechanism to keep track of lost segments; it illustrates what happens when a packet is lost on the network and fails to reach its ultimate destination. When a host sends data, it starts a countdown timer. If the timer expires without an acknowledgement being received, this host assumes that the data segment was lost. Consequently, this host retransmits a duplicate of the failing segment. TCP keeps a copy of all transmitted data for which a positive acknowledgement is still outstanding. Only after receiving the positive acknowledgement is this copy discarded to make room for other data in its buffer.

Data Stream Maintenance: The interface between TCP and a local process is a port, which is a mechanism that enables the process to call TCP and in turn enables TCP to deliver data streams to the appropriate process. Ports are identified by port numbers. To fully specify a connection, the host IP address is appended to the port number. This combination of IP address and port number is called a socket. A given socket number is unique on the internetwork. A connection between two hosts is fully described by the sockets assigned to each end of the connection.

Figure 82 shows a TCP data stream that starts with an Initial Sequence Number of 0. In figure 82, the receiving system has received and acknowledged 2000 bytes, so the current Acknowledgement Number is 2000. The receiver also has enough buffer space for another 6000 bytes, so it has advertised a Window of 6000. The sender is currently sending a segment of 1000 bytes starting with Sequence Number 4001. The sender has received no acknowledgement for the bytes from 2001 on, but continues sending data as long as it is within the window. If the sender fills the window and receives no acknowledgement of the data previously sent, it will, after an appropriate time-out, send the data again starting from the first unacknowledged byte. Retransmission would start from byte 2001 if no further acknowledgements are received. This procedure ensures that data is reliably received at the far end of the network.

From the perspective of the process, communication with the network involves sending and receiving continuous streams of data. The process is not responsible for fragmenting the data to fit lower-layer protocols. Figure 83 shows how data are processed as they travel down the protocol stack, through the network, and up the protocol stack of the receiver.
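The window arithmetic in the figure 82 example can be written out explicitly; this is just bookkeeping on the numbers quoted above, with variable names of my own.

    # Sliding-window bookkeeping for the figure 82 example: 2000 bytes
    # acknowledged, a 6000-byte advertised window, and a 1000-byte segment
    # being sent starting at sequence number 4001.
    acknowledged = 2000        # highest byte the receiver has acknowledged
    window = 6000              # window advertised with that acknowledgement
    next_to_send = 4001        # first byte of the segment now being sent
    segment_size = 1000

    window_limit = acknowledged + window           # sender must stop at byte 8000
    assert next_to_send + segment_size - 1 <= window_limit

    first_unacked = acknowledged + 1
    last_sent = next_to_send + segment_size - 1
    print("May send up through byte", window_limit)                    # 8000
    print("Unacknowledged bytes:", first_unacked, "through", last_sent)  # 2001..5000
    print("A time-out would retransmit starting at byte", first_unacked)  # 2001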
A short explanation of figure 83:
TCP receives a stream of data from the upper-layer process.
TCP may fragment the data stream into segments that meet the maximum datagram size of IP.
IP may fragment segments as it prepares datagrams that are sized to conform to the restrictions of the network.
Network protocols transmit the datagram in the form of bits.
Network protocols at the receiving host reconstruct datagrams from the bits they receive.
IP receives datagrams from the network. Where necessary, datagram fragments are reassembled to reconstruct the original segment.
TCP presents data in segments to upper-layer protocols in the form of data streams.
Figure 84 shows processes/applications and protocols that rely on the Transport Layer for the delivery of data to their counterparts across the network.

The Process/Application Layer includes all processes that use the transport layer protocols to deliver data. There are many application protocols. A good example of the concerns handled by these processes is the reconciliation of differences in data syntax between the platforms on which the applications are running. It should be clear that unless this difference in data representation is handled properly, any exchange of data involving these processes is likely to yield erroneous interpretations of numerical data. To resolve this issue, and other similar issues, TCP/IP defines the eXternal Data Representation (XDR) protocol. Reflecting on the nature of this problem, you can easily see that the problem has nothing to do with the underlying network topology, wiring, or electrical interference.

Some applications that use TCP:
TELNET: The Network Terminal Protocol, provides remote login over the network.
FTP: The File Transfer Protocol, is used for interactive file transfer between hosts.
SMTP: The Simple Mail Transfer Protocol, delivers electronic mail.
Some applications that use UDP:
SNMP: The Simple Network Management Protocol, is used to collect management information from network devices.
DNS: The Domain Name Service, maps IP addresses to the names assigned to network devices.
RIP: The Routing Information Protocol; routing is central to the way TCP/IP works, and RIP is used by network devices to exchange routing information.
NFS: The Network File System; this protocol allows files to be shared by various hosts on the network as if they were local drives.

TCP/IP Protocols Inside a Sample Gateway: Figure 85 shows the TCP/IP Protocols Inside a Sample Gateway. Figure 86 shows processes/applications and protocols that rely on the Application Layer for the delivery of data to their counterparts across the network.
http://www.citap.com/documents/tcp-ip/tcpip012.htm
April 1998 Feature Presentation
Hypothesis Testing and the Chi-Squared Test of Independence
Alison Gibbs and Martin Van Driel

Steps for Carrying out a Statistical Hypothesis Test:
- Identify the null hypothesis. Often, the goal is to show that the null hypothesis is false.
- Collect the data.
- Calculate a test statistic. A test statistic is a number, calculated from the data, which has a known statistical distribution assuming the null hypothesis is true.
- From the distribution of the test statistic, calculate the probability of getting the value we got or a more extreme value. This is the p-value.
- If the p-value is "small", we've observed data values that are very unlikely. So there must be something wrong with our assumptions. We have evidence that our null hypothesis is false.
- What's "small" enough? Our definition of small is called the significance level of our test. Commonly used values are 0.05 and 0.01.

The mean of a random observation is defined as the expected outcome, based on the distribution. For example, if we toss a fair coin four times, then the mean number of heads is 2. Now suppose we repeat the coin tossing experiment 5 times and observe on each experiment 4, 2, 0, 1 and 2 heads respectively. Then the sample mean based on these 5 observations is (4 + 2 + 0 + 1 + 2)/5 = 1.8.

Trial by Jury || Statistical Hypothesis Testing
Prosecutor || Statistician
Trial || Collection of Data
Jury decides on the verdict || Statistical test
Assume defendant is innocent || Assume the null hypothesis is true
Weigh the evidence provided by testimony and exhibits, assuming defendant is innocent || Assess the evidence provided by the data (as summarized in the test statistic), assuming null hypothesis is true
Evidence against the defendant, assuming defendant is innocent || Calculate a p-value for the test statistic, assuming null hypothesis is true
Defendant found guilty beyond a reasonable doubt || Reject the null hypothesis if p-value less than the significance level

The Law of Large Numbers states that as the sample size (number of observations) increases, the sample mean will approach the actual mean. For a population with standard deviation σ and mean μ, we say that the data has a Normal Distribution if 95% of the observations are within 2 standard deviations of the mean, 68% of the observations are within one standard deviation of the mean, and the mean is also the median. Given observations from a common distribution, the Central Limit Theorem states that as the sample size increases, the distribution of the sample mean becomes closer to a normal distribution. Also, the distribution of the sum of the random observations becomes closer to a normal distribution. We will be analyzing count data, for example, the number of women present this evening.

The Background Theory: The Calculations Behind the Test
- A fact from probability theory: If A and B are independent, then the probability of both A and B is the product of the probability of A and the probability of B.
- A random variable can be standardized by subtracting its mean and then dividing by its standard deviation. A standardized normal random variable has a normal distribution with mean 0 and standard deviation 1.
- The square of a standard normal random variable has a chi-squared distribution with one degree of freedom. The sum of the squares of k standard normal random variables has a chi-squared distribution with k degrees of freedom.
The number of degrees of freedom is a parameter of the chi-squared distribution. The higher the degrees of freedom, the flatter the distribution.
- A count can be viewed as the sum of (binomial) observations, each of which is 1 if the individual possesses the feature we're interested in, and 0 otherwise. So by the Central Limit Theorem, a count has approximately a normal distribution.
- Suppose our counts are grouped in a format such as the following, called a Two-way Contingency Table, where O_ij is the observed count that falls into category (i,j). Let n be the total number of people polled (so n is the sum of all the O_ij). Assume that there is no relationship between gender and newspaper preference. Then applying our fact from probability theory, the probability of a male preferring the Globe and Mail is the proportion of males times the proportion of Globe readers; the expected number of male Globe readers is n times that. Call this expected count in category (i,j): E_ij = (row i total) × (column j total) / n.

|| Preferred Newspaper
Gender || Globe and Mail || Toronto Star || Toronto Sun
Male || || ||
Female || || ||

If gender and newspaper are truly independent, the statistic X² = Σ (O_ij − E_ij)² / E_ij has a chi-squared distribution with (r−1)(c−1) degrees of freedom, where r is the number of rows in our table and c is the number of columns.
Note 1: We lose a degree of freedom each time we treat something as fixed, for example, the total number of males, the total number of Sun readers, etc.
Note 2: The distribution of X² follows from the above distribution theory, plus some calculation. See, for example, Mathematical Statistics with Applications, by Mendenhall, Wackerly, and Scheaffer.
- Our statistical test:
The null hypothesis: Gender and newspaper preference are independent.
The test statistic: X² = Σ (O_ij − E_ij)² / E_ij.
The distribution of the test statistic assuming the null hypothesis is true: chi-squared with (r−1)(c−1) degrees of freedom.
The conclusion: If the probability of getting an X² that is as large or larger than what we got is small, we have evidence that our null hypothesis is false.

References:
The Chance Database.
Mendenhall, W., Wackerly, D. and Scheaffer, R. Mathematical Statistics with Applications, 4th edition. PWS-Kent Publishing Company, Boston, 1990.
Moore, D. and McCabe, G. Introduction to the Practice of Statistics, 2nd edition. W.H. Freeman and Company, New York, 1993.
Statistics Handbook for the TI-83. Texas Instruments Inc., 1997.
Paulos, John Allen. Innumeracy: Mathematical Illiteracy and its Consequences. Hill and Wang, New York, 1988.
Rice, John A. Mathematical Statistics and Data Analysis, 2nd edition. Wadsworth, Belmont, California, 1995.
The SIMMS Project (Systemic Initiative for Montana Mathematics and Science). What Did You Expect, Big Chi?. Simon and Schuster, Houston.

- A life insurance company sells a term insurance policy to a 21-year-old male. The policy pays $100,000 if the insured dies within the next 5 years. The company collects a premium of $250 each year. There is a high probability that the man will live, and the insurance company will gain $1250 in premiums. But if he were to die, the company would lose almost $100,000! Why would the insurance company want to take on this much risk?
- In advertising for a study guide, the producers claim that students that use it do significantly better (p<0.05) than students who don't. What does this mean? Is there any reason you may not want to trust the producers' claim?
- A researcher is looking for evidence of extra-sensory perception. She tests 500 subjects, 4 of whom do significantly better (p<0.01) than random guessing. Should she conclude that these 4 have ESP?
- Did the baseball player Reggie Jackson earn the title "Mr. October"? In his 21-year career he had 2584 hits in 9864 regular season at-bats. During the World Series, he had 35 hits in 98 at-bats. Is the improvement in his batting average during the World Series statistically significant?
- Does gender influence newspaper preference? Test the hypothesis that there is no relationship between gender and preferred Toronto daily newspaper for the data we've collected:

|| Newspaper
Gender || Globe || Star || Sun
Male || || ||
Female || || ||

- Here is some more data on Jane Austen and her imitator (from J. Rice, Mathematical Statistics and Data Analysis, 2nd ed.). The following table gives the relative frequency of the word a preceded by (PB) and not preceded by (NPB) the word such, the word and followed by (FB) or not followed by (NFB) I, and the word the preceded by and not preceded by on. Was Austen consistent in these habits of style from one work to another? Did her imitator successfully copy this aspect of her style?

Words || Sense and Sensibility || Emma || Sanditon I || Sanditon II
a PB such || 14 || 16 || 8 || 2
a NPB such || 133 || 180 || 93 || 81
and FB I || 12 || 14 || 12 || 1
and NFB I || 241 || 285 || 139 || 153
the PB on || 11 || 6 || 8 || 17
the NPB on || 259 || 265 || 221 || 204

- Was there block judging in the ice dance competition at the Olympics? Claims have been made that the decision had been determined by the judges before the games even started. In particular, it has been claimed that the judges from the five Eastern bloc countries (Russia, Ukraine, Lithuania, Poland and the Czech Republic) agreed to support each other's competitors. Some also claim that France was part of the bloc. Could we test this judging irregularity statistically? How? (Hint: it's not a chi-squared test!)

Solutions:
- Hopefully lots of 21-year-olds buy policies from the insurance company. The law of large numbers guarantees that only a few will die, so premiums collected will more than cover pay-outs.
- Assuming that there is no difference between the two groups of students, the probability of seeing a difference as great or greater than that observed is less than 0.05. Of course, we've no indication how the producers of the study guide found students who used it, and students who didn't. Perhaps their claim says more about the students who buy study guides.
- One percent of the time we'd expect a person who is guessing randomly to do that well. So in a group of 500, it wouldn't be surprising if 5 people did that well. It's not likely that the 4 really do have ESP.
- We can test this by performing a chi-squared test of independence on the following table. The test statistic has value 4.536 and has a chi-squared distribution with 1 degree of freedom under the hypothesis of no relationship. The p-value is 0.033. Whether or not the null hypothesis should be rejected depends on the significance level. Assuming there's no relationship between Jackson's batting average and whether or not it's a World Series game, observing a difference as great or greater than what Jackson accomplished would happen 3% of the time. Do you consider that highly unusual? (A short calculation reproducing this statistic is sketched after the solutions below.)

|| Hit || No hit
Regular season || 2584 || 7280
World Series || 35 || 63

- Up to you!
- The test of Austen with herself (taking just the first three columns) has a test statistic of 23.287 which, under the null hypothesis of no relationship between work and word distribution, has a chi-squared distribution with 10 degrees of freedom and a p-value of 0.0097.
So it appears that Austen was not consistent in the use of these word combinations! So does it matter what the imitator did?
- An open question!
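For readers who want to reproduce the Reggie Jackson calculation mentioned in the solutions above, here is a plain-Python sketch of the chi-squared statistic for that 2 × 2 table; it follows the X² formula from the background theory section.

    # Chi-squared statistic for the hits / no-hits table in the solution above.
    observed = [[2584, 7280],     # regular season: hit, no hit
                [35,   63]]       # World Series:   hit, no hit

    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)

    chi_squared = 0.0
    for i, row in enumerate(observed):
        for j, o_ij in enumerate(row):
            e_ij = row_totals[i] * col_totals[j] / n     # expected count
            chi_squared += (o_ij - e_ij) ** 2 / e_ij

    print(round(chi_squared, 3))   # about 4.536, matching the solution; a
                                   # chi-squared(1) table then gives p of about 0.033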
http://www.math.toronto.edu/mathnet/simmer/topic.apr98.html
Mathematics/8/Geometry 14.1: Students prove the Pythagorean theorem. 15.1: Students use the Pythagorean theorem to determine distance and find missing lengths of sides of right triangles.

Science/8/Investigation and Experimentation 9.0: Scientific progress is made by asking meaningful questions and conducting careful investigations. As a basis for understanding this concept and addressing the content in the other three strands, students should develop their own questions and perform investigations. Students will:
a. Plan and conduct a scientific investigation to test a hypothesis.
b. Evaluate the accuracy and reproducibility of data.
c. Distinguish between variable and controlled parameters in a test.
d. Recognize the slope of the linear graph as the constant in the relationship y=kx and apply this principle in interpreting graphs constructed from data.
e. Construct appropriate graphs from data and develop quantitative statements about the relationships between variables.
f. Apply simple mathematic relationships to determine a missing quantity in a mathematic expression, given the two remaining terms (including speed = distance/time, density = mass/volume, force = pressure x area, volume = area x height).
g. Distinguish between linear and nonlinear relationships on a graph of data.

To understand and apply the Pythagorean Theorem, students will work in groups to make a simulated model of two NFL players setting up a run for the goal line. Students will view a Science of NFL Football video: The Pythagorean Theorem.

Students will be able to: ask scientific questions; explore the science and math behind using the Pythagorean Theorem in football; make a simulated model of two NFL players setting up a run for the goal line; maintain a record of their observations; and use the record of their observations to construct reasonable explanations for questions presented to them.

Materials: Science of NFL Football video: The Pythagorean Theorem, white board and markers. For each group of students: a paper towel tube cut lengthwise in half to form ramps or two 12-inch wooden rulers with grooves to be used as ramps, two glass marbles or two steelies, a 2-foot square piece of butcher paper, a metric ruler, a protractor, and masking tape.

Anticipatory Set (Lead-in): Tell the students, “Many years ago, a Greek philosopher by the name of Pythagoras discovered an amazing fact about triangles; if the triangle had a right angle (90°) and you made a square on each of the three sides, then the biggest square had the exact same area as the other two squares put together! This amazing discovery is called the Pythagorean Theorem. In geometry, the Pythagorean Theorem is the relationship among the three sides of a right triangle. The theorem states that the sum of the squares of the lengths of the two legs of the triangle (sides a and b) is equal to the square of the length of the hypotenuse (c). Also, the sum of the areas of squares with side equal to the legs of the triangle (sides a and b) equals the area of the square with its side equal to the hypotenuse.”

“Why is the Pythagorean Theorem useful to know? How can we use this amazing discovery about triangles in our everyday lives? Right triangles are everywhere, aren’t they? Anywhere that there is a right triangle is a place where the Pythagorean Theorem could be used.
The Pythagorean Theorem has been used in many ways including: playing baseball, constructing a building, and measuring a ramp (like on a moving truck). How many of you like to watch or play football? If you do, the Pythagorean Theorem could help you understand some of what is going on. Today we are going to watch a video about the Pythagorean Theorem and make a really exciting simulated model of two NFL players setting up a run for the goal line.”

Tell the students that they are going to make a simulated model of two NFL players setting up a run for the goal line. Tell the students that as they make this model they should keep the following question in mind: What factors affect the ability of the defensive player (LB - linebacker) to catch up with the offensive player (WR - wide receiver) before the offensive player reaches the goal line? (Write this question on the board.)

Divide the class into groups of 4-5 students. Make sure that each group has a paper towel tube cut lengthwise in half to form ramps or two 12-inch wooden rulers with grooves to be used as ramps, two glass marbles or two steelies, a 2-foot square piece of butcher paper, a metric ruler, a protractor, and masking tape. Next, each group should complete the following steps:
a. Draw a horizontal line near the top of the paper; label this “Goal Line”.
b. Draw another horizontal line parallel to the Goal Line and about 30 cm below it. Label this the “Action Line”.
c. On the left side of the Action Line, mark a spot labeled LB. On the same line, mark another spot about 30 cm to the right and label it WR.
d. Draw a line from the LB to the WR; this will form the baseline of the right triangle.
e. For the second leg of the triangle, construct a 90-degree angle from the WR to the Goal Line.
f. The third side is the hypotenuse of the triangle, which starts at the LB’s spot on the Action Line. The angle that will be formed between the baseline and the hypotenuse is called the “angle of pursuit”.
g. Use masking tape to secure one ramp at the 90-degree position on the Action Line headed towards the Goal Line. This is the place where the WR begins his run.
h. Use masking tape to secure the second ramp on the Action Line at the place where the LB’s spot is marked, about 30 cm from the WR.
i. Elevate the two ramps using textbooks, one under the WR’s ramp and two under the LB’s.
j. Put a mark on both ramps at the same distance from the bottom, about halfway up. This is the spot where you will place the marbles to begin their runs.
k. Release the two marbles on both ramps at the same time and in the same spot.
l. Observe what happens and adjust the angle and the elevation of the LB’s ramp only. The WR is a fixed position; do not change its ramp at all.
m. Release the marbles again. Observe and adjust until the objective is achieved: the LB is able to hit the WR before it reaches the Goal Line.
n. Use your protractor to measure the angle of pursuit.
o. Change the position of the LB on the Action Line and repeat the steps above. When the objective is met, use your protractor to measure the angle of pursuit.

Closure (Reflect Anticipatory Set): One person from each group should be chosen from within the group to share with the class the answer to the original question: What factors affect the ability of the defensive player (LB - linebacker) to catch up with the offensive player (WR - wide receiver) before the offensive player reaches the goal line?
Assessments & notes
Plan for Independent Practice: Tell the students that they are to work in their same group to answer the following questions about the simulation:
How does changing the elevation change the velocity of the LB?
How does changing the angle change the velocity of the LB?
How does changing the angle change the distance that the LB runs?
Measure in centimeters the length of the two legs of the triangle and use the formula a^2 + b^2 = c^2 to calculate the hypotenuse (c), and compare this calculation with the measured value from your paper.
Explain how the angle of pursuit affects the LB’s velocity when the LB’s position changes.
For a bigger challenge, use the physics of v = d/t to determine the velocity of each player. NOTE: the time is the same for both players. Set up a ratio and proportion equation and discuss how much faster the LB must run than the WR. (A short worked example follows below.)
Assessment Based on Objectives: Begin the next day’s lesson with the quiz titled, “The Pythagorean Theorem”. (See attached quiz under "Resources")
Possible Connections to Other Subjects: Math/Art/Technology: Students could find other applications for the Pythagorean Theorem in the world around them and make a PowerPoint presentation about what they found using detailed written descriptions and illustrations.
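As a worked example of the hypotenuse and ratio questions in the independent practice (referred to above), here is a short Python sketch using the nominal 30 cm distances from the setup; actual student measurements will of course differ.

    import math

    a = 30.0                      # baseline: LB to WR, in cm
    b = 30.0                      # WR's straight run to the Goal Line, in cm
    c = math.sqrt(a**2 + b**2)    # hypotenuse: the LB's pursuit path

    angle_of_pursuit = math.degrees(math.atan2(b, a))

    print(round(c, 1))                  # about 42.4 cm
    print(round(angle_of_pursuit, 1))   # about 45.0 degrees
    print(round(c / b, 2))              # about 1.41: the LB must cover roughly
                                        # 1.41 times the WR's distance in the same time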
http://www.lessonopoly.org/node/11703
Brown dwarfs are celestial objects ranging in mass between that of large gas giant planets and the lowest mass stars. Unlike stars on the main sequence, a brown dwarf has a mass less than that necessary to maintain hydrogen-burning nuclear fusion reactions in its core. The upper limit of its mass is between 75 (Boss, 2001. Are They Planets or What?) and 80 Jupiter masses (MJ). Alternative names have been proposed, including Planetar and Substar. Currently there is some question regarding what separates a brown dwarf from a giant planet at very low brown dwarf masses (about 13 MJ), and whether brown dwarfs are required to have experienced fusion at some point in their history. In any event, brown dwarfs heavier than 13 MJ do fuse deuterium, and those heavier than about 65 MJ also fuse lithium. The only planet known to orbit a brown dwarf is 2M1207b.

Brown dwarfs, a term coined by Jill Tarter in 1975, were originally called black dwarfs, a classification for dark substellar objects floating freely in space that were too low in mass to sustain stable hydrogen fusion. (The term black dwarf currently refers to a white dwarf that has cooled down so that it no longer emits heat or light.) Early theories concerning the nature of the lowest mass stars and the hydrogen burning limit suggested that objects with a mass less than 0.07 solar masses for Population I objects, or less than 0.09 solar masses for Population II objects, would never go through normal stellar evolution and would become completely degenerate stars (Kumar 1963). The role of deuterium burning down to 0.012 solar masses and the impact of dust formation in the cool outer atmospheres of brown dwarfs were understood by the late eighties. Such objects would, however, be hard to find in the sky, as they would emit almost no light. Their strongest emissions would be in the infrared (IR) spectrum, and for a few decades afterwards ground-based IR detectors were too imprecise to firmly identify any brown dwarfs.

Since those earlier times, numerous searches involving various methods have been conducted to find these objects. Some of those methods included multi-color imaging surveys around field stars, imaging surveys for faint companions to main sequence dwarfs and white dwarfs, surveys of young star clusters, and radial velocity monitoring for close companions. For many years, efforts to discover brown dwarfs were frustrating and searches to find them seemed fruitless. In 1988, however, University of California at Los Angeles professors Eric Becklin and Ben Zuckerman identified a faint companion to GD 165 in an infrared search of white dwarfs. The spectrum of GD 165B was very red and enigmatic, showing none of the features expected of a low-mass red dwarf star. It became clear that GD 165B would need to be classified as a much cooler object than the latest M dwarfs known at that time. GD 165B remained unique for almost a decade until the advent of the Two Micron All Sky Survey (2MASS), when Davy Kirkpatrick, of the California Institute of Technology, and others discovered many objects with similar colors and spectral features. Today, GD 165B is recognized as the prototype of a class of objects now called "L dwarfs." While the discovery of the coolest dwarf was highly significant at the time, it was debated whether GD 165B would be classified as a brown dwarf or simply a very low mass star, since observationally it is very difficult to distinguish between the two.
Interestingly, soon after the discovery of GD 165B other brown dwarf candidates were reported. Most failed to live up to their candidacy, however, and with further checks for substellar nature, such as the lithium test, many turned out to be stellar objects and not true brown dwarfs. When young (up to a gigayear old), brown dwarfs can have temperatures and luminosities similar to some stars, so other distinguishing characteristics are necessary, such as the presence of lithium. Stars will burn lithium in a little over 100 Myr, at most, while most brown dwarfs will never acquire high enough core temperatures to do so. Thus, the detection of lithium in the atmosphere of a candidate object ensures its status as a brown dwarf.

In 1995, the study of brown dwarfs changed dramatically with the discovery of three incontrovertible substellar objects, some of which were identified by the presence of the 6708 Å Li line. The most notable of these objects was Gliese 229B, which was found to have a temperature and luminosity well below the stellar range. Remarkably, its near-infrared spectrum clearly exhibited a methane absorption band at 2 micrometers, a feature that had previously only been observed in gas giant atmospheres and the atmosphere of Saturn's moon, Titan. Methane absorption is not expected at the temperatures of main-sequence stars. This discovery helped to establish yet another spectral class even cooler than L dwarfs, known as "T dwarfs", for which Gl 229B is the prototype.

The standard mechanism for star birth is through the gravitational collapse of a cold interstellar cloud of gas and dust. As the cloud contracts it heats up. The release of gravitational potential energy is the source of this heat. Early in the process the contracting gas quickly radiates away much of the energy, allowing the collapse to continue. Eventually, the central region becomes sufficiently dense to trap radiation. Consequently, the central temperature and density of the collapsed cloud increase dramatically with time, slowing the contraction, until the conditions are hot and dense enough for thermonuclear reactions to occur in the core of the protostar. For most stars, gas and radiation pressure generated by the thermonuclear fusion reactions within the core of the star will support it against any further gravitational contraction. Hydrostatic equilibrium is reached and the star will spend most of its lifetime burning hydrogen to helium as a main-sequence star. If, however, the mass of the protostar is less than about 0.08 solar mass, normal hydrogen thermonuclear fusion reactions will not ignite in the core. Gravitational contraction does not heat the small protostar very effectively, and before the temperature in the core can increase enough to trigger fusion, the density reaches the point where electrons become closely packed enough to create quantum electron degeneracy pressure. According to brown dwarf interior models, typical core densities are expected to be in the range ρc ≈ 10–10³ g/cm³. Further gravitational contraction is prevented and the result is a "failed star," or brown dwarf, that simply cools off by radiating away its internal thermal energy.

Distinguishing high mass brown dwarfs from low mass stars

Lithium: Lithium is generally present in brown dwarfs but not in low-mass stars. Stars, which achieve the high temperature necessary for fusing hydrogen, rapidly deplete their lithium.
This occurs by a collision of lithium-7 and a proton, producing two helium-4 nuclei. The temperature necessary for this reaction is just below the temperature necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is depleted. Therefore, the presence of the lithium line in a candidate brown dwarf's spectrum is a strong indicator that it is indeed substellar. The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the lithium test, and was pioneered by Rafael Rebolo and colleagues.
- However, lithium is also seen in very young stars, which have not yet had a chance to burn it off. Heavier stars like our sun can retain lithium in their outer atmospheres, which never get hot enough for lithium depletion, but those are distinguishable from brown dwarfs by their size.
- Contrariwise, brown dwarfs at the high end of their mass range can be hot enough to deplete their lithium when they are young. Dwarfs of mass greater than 65 MJ can burn off their lithium by the time they are half a billion years old [Kulkarni], thus this test is not perfect.

Methane: Unlike stars, older brown dwarfs are sometimes cool enough that over very long periods of time their atmospheres can gather observable quantities of methane. Dwarfs confirmed in this fashion include Gliese 229B.

Luminosity: Main sequence stars cool, but eventually reach a minimum luminosity which they can sustain through steady fusion. This varies from star to star, but is generally at least 0.01 percent the luminosity of our Sun. Brown dwarfs cool and darken steadily over their lifetimes: sufficiently old brown dwarfs will be too faint to be detectable.

Distinguishing low mass brown dwarfs from high mass planets

A remarkable property of brown dwarfs is that they are all roughly the same radius, more or less the radius of Jupiter. At the high end of their mass range (60-90 Jupiter masses), the volume of a brown dwarf is governed primarily by electron degeneracy pressure, as it is in white dwarfs; at the low end of the range (1-10 Jupiter masses), their volume is governed primarily by Coulomb pressure, as it is in planets. The net result is that the radii of brown dwarfs vary by only 10-15 percent over the range of possible masses. This can make distinguishing them from planets difficult. In addition, many brown dwarfs undergo no fusion; those at the low end of the mass range (under 13 Jupiter masses) are never hot enough to fuse even deuterium, and even those at the high end of the mass range (over 60 Jupiter masses) cool quickly enough that they no longer undergo fusion after some time on the order of 10 million years. However, there are other ways to distinguish dwarfs from planets:
Density is a clear giveaway. Brown dwarfs are all about the same radius, so anything that size with over 10 Jupiter masses is unlikely to be a planet.
X-ray and infrared spectra are telltale signs. Some brown dwarfs emit X-rays; and all "warm" dwarfs continue to glow tellingly in the red and infrared spectra until they cool to planetlike temperatures (under 1000 K).

Some astronomers believe that there is in fact no actual black-and-white line separating light brown dwarfs from heavy planets, and that rather there is a continuum. For example, Jupiter and Saturn are both made out of primarily hydrogen and helium, like the Sun. Saturn is nearly as large as Jupiter, despite having only 30% the mass.
Three of the giants in our solar system (Jupiter, Saturn, and Neptune) emit more heat than they receive from the Sun, and all four giant planets have their own "planetary systems" in the form of their moons. In addition, it has been found that both planets and brown dwarfs can have eccentric orbits. Currently, the International Astronomical Union considers objects with masses above the limiting mass for thermonuclear fusion of deuterium (currently calculated to be 13 Jupiter masses for objects of solar metallicity) to be brown dwarfs, whereas objects under that mass (and orbiting stars or stellar remnants) are considered planets (IAU Working Group on Extrasolar Planets: Definition of a "Planet").
Classification of brown dwarfs
The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is an optical spectrum dominated by absorption bands of titanium oxide (TiO) and vanadium oxide (VO) molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO features of M dwarfs. The subsequent identification of many field counterparts to GD 165B ultimately led Kirkpatrick and others to the definition of a new spectral class, the L dwarfs, defined in the red optical region not by weakening metal-oxide bands (TiO, VO) but by strong metal hydride bands (FeH, CrH, MgH, CaH) and prominent alkali lines (Na I, K I, Cs I, Rb I). As of April 2005, over 400 L dwarfs had been identified (see link in the references section below), most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS).
As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfs. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of H2O and carbon monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH4), features that had previously been found only in the giant planets of the solar system and Titan. CH4, H2O, and molecular hydrogen (H2) collision-induced absorption (CIA) give Gliese 229B blue near-infrared colors. Its steeply sloped red optical spectrum also lacks the FeH and CrH bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali metals Na and K. These differences led Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-band CH4 absorption. As of April 2005, 58 T dwarfs were known. NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very low-mass stars and substellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs.
The majority of flux emitted by L and T dwarfs is in the 1 to 2.5 micrometre near-infrared range. Low and decreasing temperatures through the late M, L, and T dwarf sequence result in a rich near-infrared spectrum containing a wide variety of features, from relatively narrow lines of neutral atomic species to broad molecular bands, all of which have different dependencies on temperature, gravity, and metallicity. Furthermore, these low-temperature conditions favor condensation out of the gas state and the formation of grains. Typical atmospheres of known brown dwarfs range in temperature from 2200 down to 750 K (Burrows et al. 2001).
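The mass thresholds quoted above (the roughly 0.08 solar-mass hydrogen-burning limit, the 65 Jupiter-mass lithium-burning threshold, and the IAU's 13 Jupiter-mass deuterium-burning boundary) can be collected into a rough first-pass classifier. The sketch below is illustrative only: the function and constant names are ours rather than from any catalog or library, and mass alone does not settle a real case, which is exactly why the lithium and methane tests described above matter.

```python
# Rough sketch, not from the article: label an object by mass alone using the
# thresholds quoted in the text (13 M_Jup deuterium limit, ~65 M_Jup lithium
# burning, ~0.08 solar masses for sustained hydrogen fusion).
# The names and the solar-to-Jupiter mass ratio are illustrative assumptions.

M_SUN_IN_M_JUP = 1047.6                         # approx. Jupiter masses per solar mass
HYDROGEN_LIMIT = 0.08 * M_SUN_IN_M_JUP          # ~84 M_Jup, hydrogen-burning limit
LITHIUM_BURNING = 65.0                          # M_Jup, can deplete lithium when young
DEUTERIUM_LIMIT = 13.0                          # M_Jup, IAU planet / brown dwarf boundary


def classify_by_mass(mass_mjup: float) -> str:
    """First-pass label from mass alone; real classification also needs spectra
    (lithium, methane), because young brown dwarfs overlap with low-mass stars."""
    if mass_mjup < DEUTERIUM_LIMIT:
        return "planet-mass object (never fuses deuterium)"
    if mass_mjup < LITHIUM_BURNING:
        return "brown dwarf (expected to retain lithium)"
    if mass_mjup < HYDROGEN_LIMIT:
        return "brown dwarf (may deplete lithium while young)"
    return "low-mass star (sustained hydrogen fusion)"


if __name__ == "__main__":
    for m in (1, 8, 13, 30, 65, 80, 90):
        print(f"{m:5.1f} M_Jup -> {classify_by_mass(m)}")
```

Because brown dwarf radii stay within roughly 10-15 percent of Jupiter's, a companion check could also flag mean density: an object of about Jupiter's size but with more than about 10 Jupiter masses is, as the text notes, unlikely to be a planet.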
Compared to stars, which warm themselves with steady internal fusion, brown dwarfs cool quickly over time; more massive dwarfs cool more slowly than less massive ones. Coronagraphs have recently been used to detect faint objects orbiting bright visible stars, including Gliese 229B. Sensitive telescopes equipped with charge-coupled devices (CCDs) have been used to search distant star clusters for faint objects, including Teide 1. Wide-field searches have identified individual faint objects, such as Kelu-1 (30 ly away).
- 1995: First brown dwarf verified. Teide 1, an M8 object in the Pleiades cluster, is picked out with a CCD in the Spanish Observatory of Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
- 1995: First methane brown dwarf verified. Gliese 229B is discovered orbiting the red dwarf Gliese 229A (20 ly away) using an adaptive optics coronagraph to sharpen images from the 60 inch (1.5 m) reflecting telescope at Palomar Observatory on Southern California's Mount Palomar; follow-up infrared spectroscopy made with the 200 inch (5 m) Hale telescope shows an abundance of methane.
- 1998: First X-ray-emitting brown dwarf found. Cha Halpha 1, an M8 object in the Chamaeleon I dark cloud, is determined to be an X-ray source, similar to convective late-type stars.
- December 15, 1999: First X-ray flare detected from a brown dwarf. A team at the University of California monitoring LP 944-20 (60 Jupiter masses, 16 ly away) via the Chandra X-ray Observatory catches a 2-hour flare.
- 27 July 2000: First radio emission (in flare and quiescence) detected from a brown dwarf. A team of students at the Very Large Array reported their observations of LP 944-20 in the March 15, 2001 issue of the British journal Nature.
Recent observations of known brown dwarf candidates have revealed a pattern of brightening and dimming of infrared emissions that suggests relatively cool, opaque cloud patterns obscuring a hot interior that is stirred by extreme winds. The weather on such bodies is thought to be extremely violent, comparable to but far exceeding Jupiter's famous storms. X-ray flares detected from brown dwarfs since late 1999 suggest changing magnetic fields within them, similar to those in very low-mass stars.
The brown dwarf Cha 110913-773444, located about 500 light-years away in the constellation Chamaeleon, may be in the process of forming a miniature solar system. Astronomers from Pennsylvania State University have detected what they believe to be a disk of gas and dust similar to the one hypothesized to have formed our own solar system. Cha 110913-773444 is the smallest brown dwarf found to date (8 Jupiter masses), and if it formed a solar system, it would be the smallest known object to have one. Their findings were published in the "Letters" section of the Dec. 10, 2006, issue of the Astrophysical Journal.
Some notable brown dwarfs
- 2M1207 - first brown dwarf discovered with a planetary-mass object in orbit about it.
- WD0137-349 B - first confirmed brown dwarf to have survived the primary's red giant phase (Maxted et al. 2006, Nature, 442, 543).
- Some astronomers have suggested that the Sun may be orbited by an as-yet-unobserved brown dwarf (sometimes called Nemesis), which interacts with the Oort cloud and may have helped shape the positions of the dwarf planets (Whitmire and Jackson 1984, 71) (Muller 2004, 659-665).
Some brown dwarfs are listed below, along with their significance and characteristics.
|Title||Brown Dwarf Name||Spectral Type||RA/Dec||Constellation||Notes|
|First discovered||Gliese 229 B||T6.5||06h10m34.62s -21°51'52.1"||Lepus||Discovered 1995|
|First directly imaged||Gliese 229 B||T6.5||06h10m34.62s -21°51'52.1"||Lepus||Discovered 1995|
|First verified||Teide 1||M8||3h47m18.0s +24°22'31"||Taurus||1995|
|First with planemo||2MASSW J1207334-393254||M8||12h07m33.47s -39°32'54.0"||Centaurus||
|First with a dust disk|
|First with bipolar outflow|
|First field type (solitary)||Teide 1||M8||3h47m18.0s +24°22'31"||Taurus||1995|
|First as a companion to a normal star||Gliese 229 B||T6.5||06h10m34.62s -21°51'52.1"||Lepus||1995|
|First as a companion to a white dwarf|
|First as a companion to a neutron star|
|First in a multi-star system|
|First binary brown dwarf||Epsilon Indi Ba, Bb||T1 + T6|| ||Indus||Distance: 3.626 pc|
|First trinary brown dwarf||DENIS-P J020529.0-115925 A/B/C||L5, L8 and T0||02h05m29.40s -11°59'29.7"||Cetus||Delfosse et al 1997, [mentions]|
|First halo brown dwarf||2MASS J05325346+8246465||sdL7||05h32m53.46s +82°46'46.5"||Gemini||Adam J. Burgasser, et al. 2003|
|First Late-M spectra||Teide 1||M8||3h47m18.0s +24°22'31"||Taurus||1995|
|First L spectra|
|First T spectra||Gliese 229 B||T6.5||06h10m34.62s -21°51'52.1"||Lepus||1995|
|Latest T spectrum||ULAS J0034-00||T8.5|| ||Cetus||2007|
|First mistaken as a planet|
|First X-ray-emitting||Cha Halpha 1||M8|| ||Chamaeleon||1998|
|First X-ray flare||LP 944-20||M9V||03h39m35.22s -35°25'44.1"||Fornax||1999|
|First radio emission (in flare and quiescence)||LP 944-20||M9V||03h39m35.22s -35°25'44.1"||Fornax||2000|
|Title||Brown Dwarf Name||Spectral Type||RA/Dec||Constellation||Notes|
|Metal-poor||2MASS J05325346+8246465||sdL7||05h32m53.46s +82°46'46.5"||Gemini||Distance ~10-30 pc; metallicity 0.1-0.01 ZSol|
|Smallest||Cha 110913-773444||L||11h09m13.63s -77°34'44.6"||Chamaeleon||Distance: 163 ly (50 pc); 1.8 RJupiter|
|Furthest to primary star|
|Nearest to primary star|
|Nearest||Epsilon Indi Ba, Bb||T1 + T6|| ||Indus||Distance: 3.626 pc|
|Nearest binary||Epsilon Indi Ba, Bb||T1 + T6|| ||Indus||Distance: 3.626 pc|
|Coolest||ULAS J0034-00||T8.5|| ||Cetus||600-700 K; ~50 ly; Gemini Observatory|
- (The above lists are partial and need to be expanded.)
References
- The Astrophysical Journal, Dec. 10, 2006, issue (Letters). Retrieved February 23, 2008.
- Boss, Alan. 2001. Are They Planets or What? Carnegie Institution of Washington. Retrieved September 20, 2007.
- IAU Working Group on Extrasolar Planets: Definition of a "Planet." IAU position statement. Retrieved September 20, 2007.
- Kumar, S.S. 1969. Low-Luminosity Stars. London, UK: Gordon and Breach. (An early overview paper on brown dwarfs.)
- Maxted et al. 2006. Nature 442:543. www.nature.com.
- Metchev, Stanimir A. 2006. Brown Dwarf Companions to Young Solar Analogs: An Adaptive Optics Survey Using Palomar and Keck. Dissertation.com. ISBN 158112290X.
- Muller, Richard A. 2004. Geological Society of America Special Paper 356: 659-665.
- Rebolo, Rafael, and Maria Rosa Zapatero-Osorio (eds.). 2001. Very Low-Mass Stars and Brown Dwarfs. Cambridge, UK: Cambridge University Press. ISBN 0521663350.
- Reid, Neil, and Suzanne L. Hawley. 2005. New Light on Dark Stars: Red Dwarfs, Low-Mass Stars, Brown Dwarfs. Springer Praxis Books / Astrophysics and Astronomy. New York, NY: Springer. ISBN 3540251243.
- Whitmire, Daniel P., and Albert A. Jackson. 1984. Nature 308:713.
All links retrieved March 8, 2013.
- A current list of L and T dwarfs.
- Neill Reid's pages at the Space Telescope Science Institute.
- First X-ray from brown dwarf observed, Spaceref.com, 2000.
- Brown Dwarfs and ultracool dwarfs (late-M, L, T) - D. Montes, UCM.
- Wild Weather: Iron Rain on Failed Stars - scientists are investigating astonishing weather patterns on brown dwarfs, Space.com, 2006.
- NASA Brown dwarf detectives - detailed information in a simplified sense.
- Discovery Narrows the Gap Between Planets and Brown Dwarfs, 2007.
- Deacon, N.R., and N.C. Hambly. 2006. Y-Spectral class for Ultra-Cool Dwarfs.
http://www.newworldencyclopedia.org/entry/Brown_dwarf
13
59
The amount of lift generated by an object depends on a number of factors: the density of the air, the velocity between the object and the air, the viscosity and compressibility of the air, the surface area over which the air flows, the shape of the body, and the body's inclination to the flow, also called the angle of attack.
In general, the dependence on body shape, inclination, air viscosity, and compressibility is very complex. One way to deal with complex dependencies is to characterize the dependence by a single variable. For lift, this variable is called the lift coefficient, designated "Cl". For given air conditions, shape, and inclination of the object, we have to determine a value for Cl to determine the lift. For some simple flow conditions and geometries, and low inclinations, aerodynamicists can now determine the value of Cl mathematically. But, in general, this parameter is determined experimentally, typically from wind tunnel tests.
For thin airfoils, at small angles of attack, the lift coefficient is approximately two times pi (3.14159) times the angle of attack expressed in radians.
Cl = 2 * pi * angle (in radians)
The modern lift equation states that lift is equal to the lift coefficient (Cl) times the density of the air (r) times half of the square of the velocity (V) times the wing area (A).
L = .5 * Cl * r * V^2 * A
By the time the Wrights began their studies, it had been determined that lift depends on the square of the velocity and varies linearly with the surface area of the object. Early aerodynamicists characterized the dependence on the properties of the air by a pressure coefficient called Smeaton's coefficient, which represented the pressure force (drag) on a one foot square flat plate moving at one mile per hour through the air. They believed that any object moving through the air converted some portion of the pressure force into lift, and they derived a different version of the lift equation which expressed this relationship.
Today we know that the lift varies linearly with the density of the air. Near sea level the value is .00237 slugs/cu ft, or 1.229 kg/cu m, but the value changes with air temperature and pressure, which in turn vary in a rather complex way with altitude. The linear variation with density and the variation with the square of the velocity suggest a variation with the dynamic pressure. So modern aerodynamicists include a factor of 1/2 in the definition of the modern lift equation to reference the aerodynamic forces to the dynamic pressure (1/2 density times velocity squared).
NOTICE: The modern lift equation and the lift equation used by the Wright brothers in 1900 are slightly different. The lift coefficient of the modern equation is referenced to the dynamic pressure of the flow, while the lift coefficient of the earlier times was referenced to the drag of an equivalent flat plate. So the values of these two coefficients would be different even for the same wing and the same set of flow conditions.
Using the modern lift equation and the lift coefficient given above, one can calculate the amount of lift produced at a given velocity for a given wing area. Or, for a given velocity, you can determine how big to make the wings to lift a certain weight. Here's a Java program that you can use to investigate the designs of the Wright aircraft from 1900 to 1905, which uses the modern lift equation. You can download your own copy of this applet by pushing the following button: The program is downloaded in .zip format. You must save the file to disk and then "Extract" the files.
Click on "Lift.html" to run the program off-line. You can change the values of the velocity, angle of attack, temperature, pressure, and wing area by using the sliders below the airfoil graphic, or by backspacing, typing in your value, and hitting "Return" inside the input box next to the slider. By using the drop menu labeled "Aircraft" you can choose to investigate any of the Wright aircraft from 1900 to 1905. At the right bottom you will see the calculated lift and to the right of the lift is the weight of the The aircraft designated "-K" are kites and the weight does not include a pilot. The aircraft designated "-G" are gliders and the weight does include For design purposes, you can hold the wing area constant and vary the speed and angle of attack, or hold the speed constant and vary the wing area and angle of attack by using the drop menu next to the aircraft selection. In this simulation, the change in weight due to change in wing area has been you can choose to have a plot of the lift or the lift coefficient by using the drop menu. You can plot lift versus angle of attack, velocity or wing area by pushing the appropriate button below the graph. You can perform the calculations in either English or metric units by using the drop menu labeled "Units". Finally you can turn on a "Probe" which you can move around the airfoil to display the local value of velocity of pressure. You must select which value to display by pushing a button and you move the probe by using the sliders located around the gage. Select an aircraft and then find the flight conditions that produce a lift greater than the weight. You can check with the individual aircraft pages to see how big the Wrights designed their wings. Remember that determining the lift is only a part of the design problem. You will find that a higher angle of attack produces more lift. But it also produces more lift to drag ratio is an efficiency factor for the aircraft and directly related to the The Wrights were aware that they needed both high lift and low drag (which they called "drift"). You will also find that increasing the wing area increases the lift. But in the total design, increasing wing area also increases the weight. NOTICE: In this simple program we have approximated the entire aircraft (both wings and the canard) by a single flat plate. So you can expect that our answer is only going to be a very rough estimate. Engineers used to call this a "back of the envelope" answer, since it is based on simple equations which you can solve quickly. Engineers still use these kinds of approximations to get an initial idea of the solution to a problem. But they then perform a more exact (usually longer, harder, and more expensive) to get a more precise answer. You can view a short of "Orville and Wilbur Wright" discussing the lift force and how it affected the flight of their aircraft. The movie file can be saved to your computer and viewed as a Podcast on your podcast player.
http://wright.nasa.gov/airplane/lifteq.html
13
51
Statistics Homework help, Chemistry Homework help, Math Homework help (presentation by marka7906, uploaded March 29, 2011)
Presentation transcript:
Onlinetutorsite Inc Welcomes You To The World Of Math's
In the world of shapes, there exist simple truths. From every angle, there are rules. These rules, if followed, will bring you correct answers and great happiness. In math, all secrets are revealed.
Geometry is the study of figures.
Plane geometry studies figures in a flat, two-dimensional space called a plane: polygons (triangles, quadrilaterals, and so on), perimeter and area, and circles. Solid geometry studies figures in three-dimensional space. The presentation also covers coordinate geometry and volume.
1. Points. 2. Lines and the angles they form. 3. Intersecting lines and rules about the angles formed.
A line is a continuous set of points having one dimension, length. A point has no dimension, only position. A line segment is a part of a line. An angle is the space formed when two lines meet at a point. Related topics: parallel lines, angle relationships (adjacent/complementary), intersecting lines (vertical/supplementary angles), and three ways to describe an angle.
Intersecting and parallel lines
Angle relationships for intersecting lines and for parallel lines: (1) angles opposite each other are equal and are called vertical angles, and corresponding angles are defined for parallel lines; (2) angles adjacent to each other are supplementary. Parallel lines are defined, perpendicular lines intersect at right angles, and alternate interior angles are formed by transverse lines.
Polygons: plane closed figures made up of straight line segments.
Remember, the type is defined by the number of sides: 3/triangle, 4/quadrilateral, 5/pentagon, 6/hexagon, and so on. Similar versus congruent: angles equal and sides proportional, versus angles equal and sides the same size and shape. Sum of the interior angles = (N - 2) * 180; each angle of a regular polygon is that sum divided by N (because the number of sides equals the number of angles).
Circles: a plane closed figure formed by a set of points equidistant from a fixed point called the center.
Important terms: Circumference: the boundary. Radius: the distance from the center to any point on the circumference. Diameter: a line segment passing through the center and ending on both ends at the circumference.
Chord: a line segment having both endpoints on the circumference; the longest chord is the diameter. Secant: a line passing through the circle, intersecting it at two points. Tangent: a line intersecting the circle at only one point; the radius drawn to that point is perpendicular to the tangent. Arc: a part of the circumference. Semicircle: an arc that is half the circumference. Sector: the interior part of a circle bordered by two radii and the arc they intercept. Central angle versus inscribed angle: the vertex is at the center point versus at a point on the circumference.
What is pi? It is the ratio of the circumference to the diameter of a circle, so pi * diameter = circumference. The area of a circle: A = pi * r^2. Pi is approximately 3.14, or 3 1/7 (22/7).
Coordinate geometry locates geometric figures on planes via the Cartesian coordinate system. With only x and y it is two-dimensional; with x, y, and z it is no longer on a plane but in space, so it is three-dimensional. With x and y only, the plane is divided into four quadrants (I, II, III, and IV).
Distance squared: (x2 - x1)^2 + (y2 - y1)^2, so the distance is the square root of that quantity.
Midpoint: ((x1 + x2)/2, (y1 + y2)/2).
Slope: m = (y2 - y1)/(x2 - x1).
Volumes:
Cube: V = e^3
Cylinder: V = pi * r^2 * h
Pyramid: V = (1/3) * l * w * h
Cone: V = (1/3) * pi * r^2 * h
Sphere: V = (4/3) * pi * r^3
Now the truth revealed. If you missed it, then you are only human!
Thank you. For more details visit: www.onlinetutorsite.com
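The coordinate-geometry and volume formulas in the slides translate directly into code. The following sketch simply evaluates them as written; the function names are ours, not part of the presentation.

```python
import math

# Small sketch of the plane- and solid-geometry formulas listed above
# (distance, midpoint, slope, polygon angle sum, circle area, volumes).

def distance(p, q):
    (x1, y1), (x2, y2) = p, q
    return math.hypot(x2 - x1, y2 - y1)      # sqrt((x2-x1)^2 + (y2-y1)^2)

def midpoint(p, q):
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def slope(p, q):
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1) / (x2 - x1)             # raises ZeroDivisionError for vertical lines

def polygon_interior_angle_sum(n_sides):
    return (n_sides - 2) * 180               # degrees

def circle_area(r):
    return math.pi * r ** 2

def cone_volume(r, h):
    return math.pi * r ** 2 * h / 3

def sphere_volume(r):
    return 4 / 3 * math.pi * r ** 3

if __name__ == "__main__":
    a, b = (1, 2), (4, 6)
    print(distance(a, b))                    # 5.0
    print(midpoint(a, b))                    # (2.5, 4.0)
    print(slope(a, b))                       # 1.333...
    print(polygon_interior_angle_sum(5))     # 540
    print(round(circle_area(1), 4))          # 3.1416
```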
http://www.authorstream.com/Presentation/marka7906-919194-statistics-homework-help-chemistry-math/
13
51
Introduction To Statistics: An introduction to statistics for sociology.
Statistical Terms You Should Know: Some of the major statistical terms used in sociology journals and texts.
Levels of Measurement: Level of measurement refers to the way that a variable is measured. There are four main levels of measurement that variables can have: nominal, ordinal, interval, and ratio.
Descriptive vs. Inferential Statistics: Statistical procedures can be divided into two major categories: descriptive statistics and inferential statistics. This article discusses the differences between the two.
Measures of Central Tendency: Measures of central tendency are numbers that describe what is average or typical of the distribution of data. There are three main measures of central tendency: mean, median, and mode.
The Normal Distribution: A normal distribution is a theoretical distribution based on theory rather than real data. Normal distributions are typically the goal and the ideal in research and data analysis, something that every researcher strives for.
Confidence Intervals and Confidence Levels: A confidence interval is a measure of estimation: an estimated range of values that is likely to include the population parameter being calculated. A confidence level is a measure of how accurate the confidence interval is.
Variance and Standard Deviation: Variance and standard deviation are two closely related measures of variation that you will hear a lot about in studies, journals, and statistics classes. They are two basic and fundamental concepts in statistics that must be understood in order to understand most other statistical concepts and procedures.
Crosstabs: Crosstabs are a great way to familiarize yourself with the data you are working with and to get a rough idea of how the variables in your data set are related, if at all. Crosstabs are useful for exploring the data, exploring relationships in your data, and planning future analyses.
Correlation: Correlation analysis is useful for determining the direction and strength of a relationship between two variables.
Logistic Regression: Logistic regression is a common statistical technique used in sociological studies. It provides a method for modeling a binary response variable, which takes values 0 and 1.
Analysis of Variance (ANOVA): Analysis of variance, or ANOVA for short, is a statistical test that looks for significant differences between means.
Linear Regression Analysis: Linear regression is a statistical technique that is used to learn more about the relationship between one or more independent (predictor) variables and a dependent (criterion) variable.
Principal Components and Factor Analysis: Principal components analysis (PCA) and factor analysis (FA) are statistical techniques used for data reduction or structure detection.
Structural Equation Modeling: Structural equation modeling is an advanced statistical technique that has many layers and many complex concepts. This article provides a very general overview of the method.
Survival Analysis: Survival analysis, also known as event history analysis, is a class of statistical methods for studying the occurrence and timing of events. These methods are most often applied to the study of deaths; however, they are also extremely useful in studying many different kinds of events in both the social and natural sciences.
Presenting Data in Graphic Form: Graphs tell a story with visuals rather than in words or numbers and can help readers understand the substance of the findings rather than the technical details behind the numbers. Learn about the different types of graphs used in social science research.
Cluster Analysis: Researchers are often looking for ways to organize observed data into meaningful structures or classifications. Cluster analysis is one way to do that.
Lambda and Gamma: Lambda and gamma are two measures of association that are commonly used in social science statistics and research. Lambda is used for nominal variables, while gamma is used for ordinal variables.
Index of Qualitative Variation (IQV): The index of qualitative variation (IQV) is a measure of variability for nominal variables, such as race, ethnicity, or gender. It is based on the ratio of the total number of differences in the distribution to the maximum number of possible differences within the same distribution. Learn more about the IQV, including how to calculate it; a brief worked sketch follows below.
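As an illustration of two of the measures in this list, here is a brief sketch computing the variance and standard deviation of a small interval-level sample and an IQV for a nominal variable. The data are made up, and the IQV formula used, K(1 - sum of squared proportions)/(K - 1) for K categories, is one common textbook formulation rather than anything taken from these articles.

```python
from collections import Counter
from statistics import mean, median, mode, pvariance, pstdev

# Illustrative only: made-up data, common textbook formulas.

def iqv(values):
    """Index of qualitative variation: K*(1 - sum(p^2)) / (K - 1),
    where K is the number of categories and p the category proportions.
    Returns 0 when every case falls in one category, 1 when cases are
    spread evenly across categories."""
    counts = Counter(values)
    n = len(values)
    k = len(counts)
    if k < 2:
        return 0.0                      # no variation possible with one category
    sum_p_sq = sum((c / n) ** 2 for c in counts.values())
    return k * (1 - sum_p_sq) / (k - 1)


if __name__ == "__main__":
    scores = [2, 4, 4, 4, 5, 5, 7, 9]   # interval-level example
    print("mean:", mean(scores), "median:", median(scores), "mode:", mode(scores))
    print("variance:", pvariance(scores), "std dev:", pstdev(scores))

    races = ["white", "black", "white", "asian", "white", "hispanic"]  # nominal example
    print("IQV:", round(iqv(races), 3))
```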
http://sociology.about.com/od/Statistics/Statistics.htm
13
123
This is part of the Millennium Ecosystem Assessment report Ecosystems and Human Well-Being: Synthesis.
How have ecosystems changed?
The structure of the world's ecosystems changed more rapidly in the second half of the twentieth century than at any time in recorded human history, and virtually all of Earth's ecosystems have now been significantly transformed through human actions. The most significant change in the structure of ecosystems has been the transformation of approximately one quarter (24%) of Earth's terrestrial surface to cultivated systems (C26.1.2). (See Box 1.1.) More land was converted to cropland in the 30 years after 1950 than in the 150 years between 1700 and 1850 (C26). Between 1960 and 2000, reservoir storage capacity quadrupled (C7.2.4); as a result, the amount of water stored behind large dams is estimated to be three to six times the amount held by natural river channels (this excludes natural lakes) (C7.3.2). (See Figure 1.1.) In countries for which sufficient multi-year data are available (encompassing more than half of the present-day mangrove area), approximately 35% of mangroves were lost in the last two decades (C19.2.1). Roughly 20% of the world's coral reefs were lost and an additional 20% degraded in the last several decades of the twentieth century (C19.2.1). Box 1.1 and Table 1.1 summarize important characteristics and trends in different ecosystems.
Although the most rapid changes in ecosystems are now taking place in developing countries, industrial countries historically experienced comparable rates of change. Croplands expanded rapidly in Europe after 1700 and in North America and the former Soviet Union particularly after 1850 (C26.1.1). Roughly 70% of the original temperate forests and grasslands and Mediterranean forests had been lost by 1950, largely through conversion to agriculture (C4.4.3). Historically, deforestation has been much more intensive in temperate regions than in the tropics, and Europe is the continent with the smallest fraction of its original forests remaining (C21.4.2). However, changes prior to the industrial era seemed to occur at much slower rates than current transformations.
Box 1.1. Characteristics of the World's Ecological Systems
We report assessment findings for 10 categories of the land and marine surface, which we refer to as "systems": forest, cultivated, dryland, coastal, marine, urban, polar, inland water, island, and mountain. Each category contains a number of ecosystems. However, ecosystems within each category share a suite of biological, climatic, and social factors that tend to be similar within categories and differ across categories. The MA reporting categories are not spatially exclusive; their areas often overlap. For example, transition zones between forest and cultivated lands are included in both the forest system and cultivated system reporting categories. These reporting categories were selected because they correspond to the regions of responsibility of different government ministries (such as agriculture, water, forestry, and so forth) and because they are the categories used within the Convention on Biological Diversity.
[Box 1.1 figure panels: Marine, Coastal, and Island Systems; Urban, Dryland, and Polar Systems; Inland Water and Mountain Systems. Source: Millennium Ecosystem Assessment]
The ecosystems and biomes that have been most significantly altered globally by human activity include marine and freshwater ecosystems, temperate broadleaf forests, temperate grasslands, Mediterranean forests, and tropical dry forests.
(See Figure 1.2 and C18, C20.) Within marine systems, the world’s demand for food and animal feed over the last 50 years has resulted in fishing pressure so strong that the biomass of both targeted species and those caught incidentally (the “bycatch”) has been reduced in much of the world to one tenth of the levels prior to the onset of industrial fishing (C18.ES). Globally, the degradation of fisheries is also reflected in the fact that the fish being harvested are increasingly coming from the less valuable lower trophic levels as populations of higher trophic level species are depleted. (See Figure 1.3.) Freshwater ecosystems have been modified through the creation of dams and through the withdrawal of water for human use. The construction of dams and other structures along rivers has moderately or strongly affected flows in 60% of the large river systems in the world (C20.4.2). Water removal for human uses has reduced the flow of several major rivers, including the Nile, Yellow, and Colorado Rivers, to the extent that they do not always flow to the sea. As water flows have declined, so have sediment flows, which are the source of nutrients important for the maintenance of estuaries. Worldwide, although human activities have increased sediment flows in rivers by about 20%, reservoirs and water diversions prevent about 30% of sediments from reaching the oceans, resulting in a net reduction of sediment delivery to estuaries of roughly 10% (C19.ES). Within terrestrial ecosystems, more than two thirds of the area of 2 of the world’s 14 major terrestrial biomes (temperate grasslands and Mediterranean forests) and more than half of the area of 4 other biomes (tropical dry forests, temperate broadleaf forests, tropical grassland, and flooded grasslands) had been converted (primarily to agriculture) by 1990, as Figure 1.3 indicated. Among the major biomes, only tundra and boreal forests show negligible levels of loss and conversion, although they have begun to be affected by climate change. Globally, the rate of conversion of ecosystems has begun to slow largely due to reductions in the rate of expansion of cultivated land, and in some regions (particularly in temperate zones) ecosystems are returning to conditions and species compositions similar to their pre-conversion states. Yet rates of ecosystem conversion remain high or are increasing for specific ecosystems and regions. Under the aegis of the MA, the first systematic examination of the status and trends in terrestrial and coastal land cover was carried out using global and regional datasets. The pattern of deforestation, afforestation, and dryland degradation between 1980 and 2000 is shown in Figure 1.4. Opportunities for further expansion of cultivation are diminishing in many regions of the world as most of the land well-suited for intensive agriculture has been converted to cultivation (C26. ES). Increased agricultural productivity is also diminishing the need for agricultural expansion. As a result of these two factors, a greater fraction of land in cultivated systems (areas with at least 30% of land cultivated) is actually being cultivated, the intensity of cultivation of land is increasing, fallow lengths are decreasing, and management practices are shifting from monocultures to polycultures. Since 1950, cropland areas have stabilized in North America and decreased in Europe and China (C26.1.1). Cropland areas in the Former Soviet Union have decreased since 1960 (C26.1.1). 
Within temperate and boreal zones, forest cover increased by approximately 2.9 million hectares per year in the 1990s, of which approximately 40% was forest plantations (C21.4.2). In some cases, rates of conversion of ecosystems have apparently slowed because most of the ecosystem has now been converted, as is the case with temperate broadleaf forests and Mediterranean forests (C4.4.3).
Ecosystem processes, including water, nitrogen, carbon, and phosphorus cycling, changed more rapidly in the second half of the twentieth century than at any time in recorded human history. Human modifications of ecosystems have changed not only the structure of the systems (such as what habitats or species are present in a particular location), but their processes and functioning as well. The capacity of ecosystems to provide services derives directly from the operation of natural biogeochemical cycles that in some cases have been significantly modified.
- Water Cycle: Water withdrawals from rivers and lakes for irrigation or for urban or industrial use doubled between 1960 and 2000 (C7.2.4). (Worldwide, 70% of water use is for agriculture (C7.2.2).) Large reservoir construction has doubled or tripled the residence time of river water—the average time, that is, that a drop of water takes to reach the sea (C7.3.2). Globally, humans use slightly more than 10% of the available renewable freshwater supply through household, agricultural, and industrial activities (C7.2.3), although in some regions such as the Middle East and North Africa, humans use 120% of renewable supplies (the excess is obtained through the use of groundwater supplies at rates greater than their rate of recharge) (C7.2.2).
- Carbon Cycle: Since 1750, the atmospheric concentration of carbon dioxide has increased by about 34% (from about 280 parts per million to 376 parts per million in 2003) (S7.3.1). Approximately 60% of that increase (60 parts per million) has taken place since 1959. The effect of changes in terrestrial ecosystems on the carbon cycle reversed during the last 50 years. Those ecosystems were on average a net source of CO2 during the nineteenth and early twentieth centuries (primarily due to deforestation, but with contributions from degradation of agricultural, pasture, and forestlands) and became a net sink sometime around the middle of the last century (although carbon losses from land use change continue at high levels) (high certainty). Factors contributing to the growth of the role of ecosystems in carbon sequestration include afforestation, reforestation, and forest management in North America, Europe, China, and other regions; changed agriculture practices; and the fertilizing effects of nitrogen deposition and increasing atmospheric CO2 (high certainty) (C13.ES).
- Nitrogen Cycle: The total amount of reactive, or biologically available, nitrogen created by human activities increased ninefold between 1890 and 1990, with most of that increase taking place in the second half of the century in association with increased use of fertilizers (S7.3.2). (See Figures 1.5 and 1.6.) A recent study of global human contributions to reactive nitrogen flows projected that flows will increase from approximately 165 teragrams of reactive nitrogen in 1999 to 270 teragrams in 2050, an increase of 64% (R9 Fig 9.1). More than half of all the synthetic nitrogen fertilizer (which was first produced in 1913) ever used on the planet has been used since 1985 (R9.2).
Human activities have now roughly doubled the rate of creation of reactive nitrogen on the land surfaces of Earth (R9.2). The flux of reactive nitrogen to the oceans increased by nearly 80% from 1860 to 1990, from roughly 27 teragrams of nitrogen per year to 48 teragrams in 1990 (R9). (This change is not uniform over Earth, however, and while some regions such as Labrador and Hudson's Bay in Canada have seen little if any change, the fluxes from more developed regions such as the northeastern United States, the watersheds of the North Sea in Europe, and the Yellow River basin in China have increased ten- to fifteenfold.)
- Phosphorus Cycle: The use of phosphorus fertilizers and the rate of phosphorus accumulation in agricultural soils increased nearly threefold between 1960 and 1990, although the rate has declined somewhat since that time (S7 Fig 7.18). The current flux of phosphorus to the oceans is now triple that of background rates (approximately 22 teragrams of phosphorus per year versus the natural flux of 8 teragrams) (R9.2).
A change in an ecosystem necessarily affects the species in the system, and changes in species affect ecosystem processes. The distribution of species on Earth is becoming more homogenous. By homogenous, we mean that the differences between the set of species at one location on the planet and the set at another location are, on average, diminishing. The natural process of evolution, and particularly the combination of natural barriers to migration and local adaptation of species, led to significant differences in the types of species in ecosystems in different regions. But these regional differences in the planet's biota are now being diminished. Two factors are responsible for this trend. First, the extinction of species or the loss of populations results in the loss of the presence of species that had been unique to particular regions. Second, the rate of invasion or introduction of species into new ranges is already high and continues to accelerate apace with growing trade and faster transportation. (See Figure 1.7.) For example, a high proportion of the roughly 100 non-native species in the Baltic Sea are native to the North American Great Lakes, and 75% of the recent arrivals of about 170 non-native species in the Great Lakes are native to the Baltic Sea (S10.5). When species decline or go extinct as a result of human activities, they are replaced by a much smaller number of expanding species that thrive in human-altered environments. One effect is that in some regions where diversity has been low, the biotic diversity may actually increase—a result of invasions of non-native forms. (This is true in continental areas such as the Netherlands as well as on oceanic islands.)
Across a range of taxonomic groups, either the population size or range or both of the majority of species is currently declining. Studies of amphibians globally, African mammals, birds in agricultural lands, British butterflies, Caribbean corals, and fishery species show the majority of species to be declining in range or number. Exceptions include species that have been protected in reserves, that have had their particular threats (such as overexploitation) eliminated, or that tend to thrive in landscapes that have been modified by human activity (C4.ES). Between 10% and 30% of mammal, bird, and amphibian species are currently threatened with extinction (medium to high certainty), based on IUCN–World Conservation Union criteria for threats of extinction.
As of 2004, comprehensive assessments of every species within major taxonomic groups have been completed for only three groups of animals (mammals, birds, and amphibians) and two plant groups (conifers and cycads, a group of evergreen palm-like plants). Specialists on these groups have categorized species as "threatened with extinction" if they meet a set of quantitative criteria involving their population size, the size of the area in which they are found, and trends in population size or area. (Under the widely used IUCN criteria for extinction, the vast majority of species categorized as "threatened with extinction" have approximately a 10% chance of going extinct within 100 years, although some long-lived species will persist much longer even though their small population size and lack of recruitment mean that they have a very high likelihood of extinction.) Twelve percent of bird species, 23% of mammals, and 25% of conifers are currently threatened with extinction; 32% of amphibians are threatened with extinction, but information is more limited and this may be an underestimate. Higher levels of threat have been found in the cycads, where 52% are threatened (C4.ES). In general, freshwater habitats tend to have the highest proportion of threatened species (C4.5.2).
Over the past few hundred years, humans have increased the species extinction rate by as much as 1,000 times the background rates typical over the planet's history (medium certainty) (C4.ES, C4.4.2). (See Figure 1.8.) Extinction is a natural part of Earth's history. Most estimates of the total number of species today lie between 5 million and 30 million, although the overall total could be higher than 30 million if poorly known groups such as deep-sea organisms, fungi, and microorganisms including parasites have more species than currently estimated. Species present today represent only 2–4% of all species that have ever lived. The fossil record appears to be punctuated by five major mass extinctions, the most recent of which occurred 65 million years ago. The average rate of extinction found for marine and mammal fossil species (excluding extinctions that occurred in the five major mass extinctions) is approximately 0.1–1 extinctions per million species per year. There are approximately 100 documented extinctions of birds, mammals, and amphibians over the past 100 years, a rate 50–500 times higher than background rates. Including possibly extinct species, the rate is more than 1,000 times higher than background rates. Although the data and techniques used to estimate current extinction rates have improved over the past two decades, significant uncertainty still exists in measuring current rates of extinction because the extent of extinctions of undescribed taxa is unknown, the status of many described species is poorly known, it is difficult to document the final disappearance of very rare species, and there are time lags between the impact of a threatening process and the resulting extinction.
Genetic diversity has declined globally, particularly among cultivated species. The extinction of species and loss of unique populations has resulted in the loss of unique genetic diversity contained by those species and populations. For wild species, there are few data on the actual changes in the magnitude and distribution of genetic diversity (C4.4), although studies have documented declining genetic diversity in wild species that have been heavily exploited.
In cultivated systems, since 1960 there has been a fundamental shift in the pattern of intra-species diversity in farmers' fields and farming systems as the crop varieties planted by farmers have shifted from locally adapted and developed populations (land races) to more widely adapted varieties produced through formal breeding systems (modern varieties). Roughly 80% of the wheat area in developing countries and three quarters of the rice area in Asia are planted with modern varieties (C26.2.1). (For other crops, such as maize, sorghum, and millet, the proportion of area planted to modern varieties is far smaller.) The on-farm losses of genetic diversity of crops and livestock have been partially offset by the maintenance of genetic diversity in seed banks.
How have ecosystem services and their uses changed?
Ecosystem services are the benefits provided by ecosystems. These include provisioning services such as food, water, timber, fiber, and genetic resources; regulating services such as the regulation of climate, floods, disease, and water quality as well as waste treatment; cultural services such as recreation, aesthetic enjoyment, and spiritual fulfillment; and supporting services such as soil formation, pollination, and nutrient cycling. (See Box 2.1.) Human use of all ecosystem services is growing rapidly. Approximately 60% (15 out of 24) of the ecosystem services evaluated in this assessment (including 70% of regulating and cultural services) are being degraded or used unsustainably. (See Table 2.1.) Of 24 provisioning, cultural, and regulating ecosystem services for which sufficient information was available, the use of 20 continues to increase. The use of one service, capture fisheries, is now declining as a result of a decline in the quantity of fish, which in turn is due to excessive capture of fish in past decades. Two other services (fuelwood and fiber) show mixed patterns: the use of some types of fiber is increasing and others decreasing, and in the case of fuelwood there is evidence of a recent peak in use. Humans have enhanced production of three ecosystem services – crops, livestock, and aquaculture – through expansion of the area devoted to their production or through technological inputs. Recently, the service of carbon sequestration has been enhanced globally, due in part to the re-growth of forests in temperate regions, although previously deforestation had been a net source of carbon emissions. Half of provisioning services (6 of 11) and nearly 70% (9 of 13) of regulating and cultural services are being degraded or used unsustainably.
- Provisioning Services: The quantity of provisioning ecosystem services such as food, water, and timber used by humans increased rapidly, often more rapidly than population growth although generally slower than economic growth, during the second half of the twentieth century. And it continues to grow. In a number of cases, provisioning services are being used at unsustainable rates. The growing human use has been made possible by a combination of substantial increases in the absolute amount of some services produced by ecosystems and an increase in the fraction used by humans. World population doubled between 1960 and 2000, from 3 billion to 6 billion people, and the global economy increased more than sixfold.
During this time, food production increased by roughly two-and-a-half times (a 160% increase in food production between 1961 and 2003), water use doubled, wood harvests for pulp and paper tripled, and timber production increased by nearly 60% (C9.ES, C9.2.2, S7, C7.2.3, C8.1). (Food production increased fourfold in developing countries over this period.) The sustainability of the use of provisioning services differs in different locations. However, the use of several provisioning services is unsustainable even in the global aggregate. The current level of use of capture fisheries (marine and freshwater) is not sustainable, and many fisheries have already collapsed. (See Figure 2.1.) Currently, one quarter of important commercial fish stocks are overexploited or significantly depleted (high certainty) (C8.2.2). From 5% to possibly 25% of global freshwater use exceeds long-term accessible supplies and is maintained only through engineered water transfers or the overdraft of groundwater supplies (low to medium certainty) (C7.ES). Between 15% and 35% of irrigation withdrawals exceed supply rates and are therefore unsustainable (low to medium certainty) (C7.2.2). Current agricultural practices are also unsustainable in some regions due to their reliance on unsustainable sources of water, harmful impacts caused by excessive nutrient or pesticide use, salinization, nutrient depletion, and rates of soil loss that exceed rates of soil formation.
- Regulating Services: Humans have substantially altered regulating services such as disease and climate regulation by modifying the ecosystem providing the service and, in the case of waste processing services, by exceeding the capabilities of ecosystems to provide the service. Most changes to regulating services are inadvertent results of actions taken to enhance the supply of provisioning services. Humans have substantially modified the climate regulation service of ecosystems—first through land use changes that contributed to increases in the amount of carbon dioxide and other greenhouse gases such as methane and nitrous oxide in the atmosphere and more recently by increasing the sequestration of carbon dioxide (although ecosystems remain a net source of methane and nitrous oxide). Modifications of ecosystems have altered patterns of disease by increasing or decreasing habitat for certain diseases or their vectors (such as dams and irrigation canals that provide habitat for schistosomiasis) or by bringing human populations into closer contact with various disease organisms. Changes to ecosystems have contributed to a significant rise in the number of floods and major wildfires on all continents since the 1940s. Ecosystems serve an important role in detoxifying wastes introduced into the environment, but there are intrinsic limits to that waste processing capability. For example, aquatic ecosystems "cleanse" on average 80% of their global incident nitrogen loading, but this intrinsic self-purification capacity varies widely and is being reduced by the loss of wetlands (C7.2.5).
- Cultural Services: Although the use of cultural services has continued to grow, the capability of ecosystems to provide cultural benefits has been significantly diminished in the past century (C17). Human cultures are strongly influenced by ecosystems, and ecosystem change can have a significant impact on cultural identity and social stability.
Human cultures, knowledge systems, religions, heritage values, social interactions, and the linked amenity services (such as aesthetic enjoyment, recreation, artistic and spiritual fulfillment, and intellectual development) have always been influenced and shaped by the nature of the ecosystem and ecosystem conditions. Many of these benefits are being degraded, either through changes to ecosystems (a recent rapid decline in the numbers of sacred groves and other such protected areas, for example) or through societal changes (such as the loss of languages or of traditional knowledge) that reduce people's recognition or appreciation of those cultural benefits. Rapid loss of culturally valued ecosystems and landscapes can contribute to social disruptions and societal marginalization. And there has been a decline in the quantity and quality of aesthetically pleasing natural landscapes.
Box 2.1. Ecosystem Services
Ecosystem services are the benefits people obtain from ecosystems. These include provisioning, regulating, and cultural services that directly affect people and the supporting services needed to maintain other services (CF2). Many of the services listed here are highly interlinked. (Primary production, photosynthesis, nutrient cycling, and water cycling, for example, all involve different aspects of the same biological processes.)
Provisioning services. These are the products obtained from ecosystems, including:
- Food. This includes the vast range of food products derived from plants, animals, and microbes.
- Fiber. Materials included here are wood, jute, cotton, hemp, silk, and wool.
- Fuel. Wood, dung, and other biological materials serve as sources of energy.
- Genetic resources. This includes the genes and genetic information used for animal and plant breeding and biotechnology.
- Biochemicals, natural medicines, and pharmaceuticals. Many medicines, biocides, food additives such as alginates, and biological materials are derived from ecosystems.
- Ornamental resources. Animal and plant products, such as skins, shells, and flowers, are used as ornaments, and whole plants are used for landscaping and ornaments.
- Fresh water. People obtain fresh water from ecosystems, and thus the supply of fresh water can be considered a provisioning service. Fresh water in rivers is also a source of energy. Because water is required for other life to exist, however, it could also be considered a supporting service.
Regulating services. These are the benefits obtained from the regulation of ecosystem processes, including:
- Air quality regulation. Ecosystems both contribute chemicals to and extract chemicals from the atmosphere, influencing many aspects of air quality.
- Climate regulation. Ecosystems influence climate both locally and globally. At a local scale, for example, changes in land cover can affect both temperature and precipitation. At the global scale, ecosystems play an important role in climate by either sequestering or emitting greenhouse gases.
- Water regulation. The timing and magnitude of runoff, flooding, and aquifer recharge can be strongly influenced by changes in land cover, including, in particular, alterations that change the water storage potential of the system, such as the conversion of wetlands or the replacement of forests with croplands or croplands with urban areas.
- Erosion regulation. Vegetative cover plays an important role in soil retention and the prevention of landslides.
- Water purification and waste treatment.
Ecosystems can be a source of impurities (for instance, in fresh water) but also can help filter out and decompose organic wastes introduced into inland waters and coastal and marine ecosystems and can assimilate and detoxify compounds through soil and subsoil processes.
- Disease regulation. Changes in ecosystems can directly change the abundance of human pathogens, such as cholera, and can alter the abundance of disease vectors, such as mosquitoes.
- Pest regulation. Ecosystem changes affect the prevalence of crop and livestock pests and diseases.
- Pollination. Ecosystem changes affect the distribution, abundance, and effectiveness of pollinators.
Cultural services. These are the non-material benefits people obtain from ecosystems through spiritual enrichment, cognitive development, reflection, recreation, and aesthetic experiences, including:
- Cultural diversity. The diversity of ecosystems is one factor influencing the diversity of cultures.
- Knowledge systems (traditional and formal). Ecosystems influence the types of knowledge systems developed by different cultures.
- Educational values. Ecosystems and their components and processes provide the basis for both formal and informal education in many societies.
- Inspiration. Ecosystems provide a rich source of inspiration for art, folklore, national symbols, architecture, and advertising.
- Aesthetic values. Many people find beauty or aesthetic value in various aspects of ecosystems, as reflected in the support for parks, scenic drives, and the selection of housing locations.
- Social relations. Ecosystems influence the types of social relations that are established in particular cultures. Fishing societies, for example, differ in many respects in their social relations from nomadic herding or agricultural societies.
- Sense of place. Many people value the "sense of place" that is associated with recognized features of their environment, including aspects of the ecosystem.
- Cultural heritage values. Many societies place high value on the maintenance of either historically important landscapes ("cultural landscapes") or culturally significant species.
- Recreation and ecotourism. People often choose where to spend their leisure time based in part on the characteristics of the natural or cultivated landscapes in a particular area.
Supporting services are those that are necessary for the production of all other ecosystem services. They differ from provisioning, regulating, and cultural services in that their impacts on people are often indirect or occur over a very long time, whereas changes in the other categories have relatively direct and short-term impacts on people. (Some services, like erosion regulation, can be categorized as both a supporting and a regulating service, depending on the time scale and immediacy of their impact on people.) These services include:
- Soil formation. Because many provisioning services depend on soil fertility, the rate of soil formation influences human well-being in many ways.
- Primary production. The assimilation or accumulation of energy and nutrients by organisms.
- Nutrient cycling. Approximately 20 nutrients essential for life, including nitrogen and phosphorus, cycle through ecosystems and are maintained at different concentrations in different parts of ecosystems.
- Water cycling. Water cycles through ecosystems and is essential for living organisms.
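The headline tallies quoted at the start of this section (15 of 24 evaluated services degraded or used unsustainably, 6 of 11 provisioning services, 9 of 13 regulating and cultural services) are simple fractions, and the tiny sketch below only re-derives those percentages; it is our illustration, not part of the MA report.

```python
# Illustrative check of the summary fractions quoted in this section;
# the counts come from the text above, the helper function is ours.

def share(degraded: int, total: int) -> str:
    return f"{degraded} of {total} = {100 * degraded / total:.0f}%"

if __name__ == "__main__":
    print("All evaluated services:  ", share(15, 24))   # ~62%, reported as "approximately 60%"
    print("Provisioning services:   ", share(6, 11))    # ~55%, reported as "half"
    print("Regulating and cultural: ", share(9, 13))    # ~69%, reported as "nearly 70%"
```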
Global gains in the supply of food, water, timber, and other provisioning services were often achieved in the past century despite local resource depletion and local restrictions on resource use by shifting production and harvest to new, underexploited regions, sometimes considerable distances away. These options are diminishing. This trend is most distinct in the case of marine fisheries. As individual stocks have been depleted, fishing pressure has shifted to less exploited stocks (C18.2.1). Industrial fishing fleets have also shifted to fishing further offshore and in deeper water to meet global demand (C18.ES). (See Figure 2.2.) A variety of drivers related to market demand, supply, and government policies have influenced patterns of timber harvest. For example, international trade in forest products increases when a nation’s forests can no longer meet demand or when policies have been established to restrict or ban timber harvest. Although human demand for ecosystem services continues to grow in the aggregate, the demand for particular services in specific regions is declining as substitutes are developed. For example, kerosene, electricity, and other energy sources are increasingly being substituted for fuelwood (still the primary source of energy for heating and cooking for some 2.6 billion people) (C9.ES). The substitution of a variety of other materials for wood (such as vinyl, plastics, and metal) has contributed to relatively slow growth in global timber consumption in recent years (C9.2.1). While the use of substitutes can reduce pressure on specific ecosystem services, this may not always have positive net environmental benefits. Substitution of fuelwood by fossil fuels, for example, reduces pressure on forests and lowers indoor air pollution, but it may increase net greenhouse gas emissions. Substitutes are also often costlier to provide than the original ecosystem services. Both the supply and the resilience of ecosystem services are affected by changes in biodiversity. Biodiversity is the variability among living organisms and the ecological complexes of which they are part. When a species is lost from a particular location (even if it does not go extinct globally) or introduced to a new location, the various ecosystem services associated with that species are changed. More generally, when a habitat is converted, an array of ecosystem services associated with the species present in that location is changed, often with direct and immediate impacts on people (S10). Changes in biodiversity also have numerous indirect impacts on ecosystem services over longer time periods, including influencing the capacity of ecosystems to adjust to changing environments (medium certainty), causing disproportionately large and sometimes irreversible changes in ecosystem processes, influencing the potential for infectious disease transmission, and, in agricultural systems, influencing the risk of crop failure in a variable environment and altering the potential impacts of pests and pathogens (medium to high certainty) (C11.ES, C14.ES). The modification of an ecosystem to alter one ecosystem service (to increase food or timber production, for instance) generally results in changes to other ecosystem services as well (CWG, SG7). Trade-offs among ecosystem services are commonplace. (See Table 2.2.)
For example, actions to increase food production often involve one or more of the following: increased water use, degraded water quality, reduced biodiversity, reduced forest cover, loss of forest products, or release of greenhouse gases. Frequent cultivation, irrigated rice production, livestock production, and burning of cleared areas and crop residues now release 1,600±800 million tons of carbon per year in CO2 (C26.ES). Cultivation, irrigated rice production, and livestock production release between 106 million and 201 million tons of carbon per year in methane (C13 Table 13.1). About 70% of anthropogenic nitrous oxide gas emissions are attributable to agriculture, mostly from land conversion and nitrogen fertilizer use (C26.ES). Similarly, the conversion of forest to agriculture can significantly change flood frequency and magnitude, although the amount and direction of this impact are highly dependent on the characteristics of the local ecosystem and the nature of the land cover change (C21.5.2). Many trade-offs associated with ecosystem services are expressed in areas remote from the site of degradation. For example, conversion of forests to agriculture can affect water quality and flood frequency downstream of where the ecosystem change occurred. And increased application of nitrogen fertilizers to croplands can have negative impacts on coastal water quality. These trade-offs are rarely taken fully into account in decision-making, partly due to the sectoral nature of planning and partly because some of the effects are also displaced in time (such as long-term climate impacts). The net benefits gained through actions to increase the productivity or harvest of ecosystem services have been less than initially believed once negative trade-offs are taken into account. The benefits of resource management actions have traditionally been evaluated only from the standpoint of the service targeted by the management intervention. However, management interventions to increase any particular service almost always result in costs to other services. Negative trade-offs are commonly found between individual provisioning services and between provisioning services and the combined regulating, cultural, and supporting services and biodiversity. Taking the costs of these negative trade-offs into account reduces the apparent benefits of the various management interventions. For example: - Expansion of commercial shrimp farming has had serious impacts on ecosystems, including loss of vegetation, deterioration of water quality, decline of capture fisheries, and loss of biodiversity (R6, C19). - Expansion of livestock production around the world has often led to overgrazing and dryland degradation, rangeland fragmentation, loss of wildlife habitat, dust formation, bush encroachment, deforestation, nutrient overload through disposal of manure, and greenhouse gas emissions (R6.ES). - Poorly designed and executed agricultural policies led to an irreversible change in the Aral Sea ecosystem. By 1998, the Aral Sea had lost more than 60% of its area and approximately 80% of its volume, and ecosystem-related problems in the region now include excessive salt content of major rivers, contamination of agricultural products with agrochemicals, high levels of turbidity in major water sources, high levels of pesticides and phenols in surface waters, loss of soil fertility, extinctions of species, and destruction of commercial fisheries (R6 Box 6.9).
- Forested riparian wetlands adjacent to the Mississippi River in the United States had the capacity to store about 60 days of river discharge. With the removal of the wetlands through canalization, leveeing, and draining, the remaining wetlands have a storage capacity of less than 12 days of discharge, an 80% reduction in flood storage capacity (C16.1.1). However, positive synergies can be achieved as well when actions to conserve or enhance a particular component of an ecosystem or its services benefit other services or stakeholders. Agroforestry can meet human needs for food and fuel, restore soils, and contribute to biodiversity conservation. Intercropping can increase yields, increase biocontrol, reduce soil erosion, and reduce weed invasion in fields. Urban parks and other urban green spaces provide spiritual, aesthetic, educational, and recreational benefits as well as services such as water purification, wildlife habitat, waste management, and carbon sequestration. Protection of natural forests for biodiversity conservation can also reduce carbon emissions and protect water supplies. Protection of wetlands can contribute to flood control and also help to remove pollutants such as phosphorus and nitrogen from the water. For example, it is estimated that the nitrogen load from the heavily polluted Illinois River basin to the Mississippi River could be cut in half by converting 7% of the basin back to wetlands (R9.4.5). Positive synergies often exist among regulating, cultural, and supporting services and with biodiversity conservation.
How have ecosystem changes affected human well-being and poverty alleviation?
Relationships between Ecosystem Services and Human Well-being
Changes in ecosystem services influence all components of human well-being, including the basic material needs for a good life, health, good social relations, security, and freedom of choice and action (CF3). (See Box 3.1.) Humans are fully dependent on Earth’s ecosystems and the services that they provide, such as food, clean water, disease regulation, climate regulation, spiritual fulfillment, and aesthetic enjoyment. The relationship between ecosystem services and human well-being is mediated by access to manufactured, human, and social capital. Human well-being depends on ecosystem services but also on the supply and quality of social capital, technology, and institutions. These factors mediate the relationship between ecosystem services and human well-being in ways that remain contested and incompletely understood. The relationship between human well-being and ecosystem services is not linear. When an ecosystem service is abundant relative to the demand, a marginal increase in ecosystem services generally contributes only slightly to human well-being (or may even diminish it). But when the service is relatively scarce, a small decrease can substantially reduce human well-being (S.SDM, SG3.4). Ecosystem services contribute significantly to global employment and economic activity. The ecosystem service of food production contributes by far the most to economic activity and employment. In 2000, the market value of food production was $981 billion, or roughly 3% of gross world product, but it is a much higher share of GDP within developing countries (C8 Table 8.1). That year, for example, agriculture (including forestry and fishing) represented 24% of total GDP in countries with per capita incomes less than $765 (the low-income developing countries, as defined by the World Bank) (C26.5.1).
The agricultural labor force contained 1.3 billion people globally—approximately a fourth (22%) of the world’s population and half (46%) of the total labor force—and some 2.6 billion people, more than 40% of the world’s population, lived in agriculturally based households (C26.5.1). Significant differences exist between developing and industrial countries in these patterns. For example, in the United States only 2.4% of the labor force works in agriculture. Other ecosystem services (or commodities based on ecosystem services) that make significant contributions to national economic activity include timber (around $400 billion), marine fisheries (around $80 billion in 2000), marine aquaculture ($57 billion in 2000), recreational hunting and fishing ($50 billion and $24–37 billion annually, respectively, in the United States alone), as well as edible forest products, botanical medicines, and medicinal plants (C9.ES, C18.1, C20.ES). And many other industrial products and commodities rely on ecosystem services, such as water, as inputs. The degradation of ecosystem services represents a loss of a capital asset (C5.4.1). (See Figure 3.1.) Both renewable resources such as ecosystem services and nonrenewable resources such as mineral deposits, soil nutrients, and fossil fuels are capital assets. Yet traditional national accounts do not include measures of resource depletion or of the degradation of renewable resources. As a result, a country could cut its forests and deplete its fisheries, and this would show only as a positive gain to GDP despite the loss of the capital asset. Moreover, many ecosystem services are available freely to those who use them (fresh water in aquifers, for instance, or the use of the atmosphere as a sink for pollutants), and so again their degradation is not reflected in standard economic measures. When estimates of the economic losses associated with the depletion of natural assets are factored into measurements of the total wealth of nations, they significantly change the balance sheet of those countries with economies especially dependent on natural resources. For example, countries such as Ecuador, Ethiopia, Kazakhstan, Republic of Congo, Trinidad and Tobago, Uzbekistan, and Venezuela that had positive growth in net savings (reflecting a growth in the net wealth of the country) in 2001 actually experienced a loss in net savings when depletion of natural resources (energy and forests) and estimated damages from carbon emissions (associated with contributions to climate change) were factored into the accounts. In 2001, in 39 of the 122 countries for which sufficient data were available, net national savings (expressed as a percent of gross national income) were reduced by at least 5% when costs associated with the depletion of natural resources (unsustainable forestry, depletion of fossil fuels) and damage from carbon emissions were included.
Box 3.1. Linkages between Ecosystem Services and Human Well-being
Human well-being has five main components: the basic material needs for a good life, health, good social relations, security, and freedom of choice and action. (See Box Figure A.) This last component is influenced by other constituents of well-being (as well as by other factors including, notably, education) and is also a precondition for achieving other components of well-being, particularly with respect to equity and fairness. Human well-being is a continuum—from extreme deprivation, or poverty, to a high attainment or experience of well-being.
Ecosystems underpin human well-being through supporting, provisioning, regulating, and cultural services. Well-being also depends on the supply and quality of human services, technology, and institutions.
Basic Materials for a Good Life
This refers to the ability to have a secure and adequate livelihood, including income and assets, enough food and water at all times, shelter, energy to keep warm and cool, and access to goods. Changes in provisioning services such as food, water, and fuelwood have very strong impacts on the adequacy of material for a good life. Access to these materials is heavily mediated by socioeconomic circumstances. For the wealthy, local changes in ecosystems may not cause a significant change in their access to necessary material goods, which can be purchased from other locations, sometimes at artificially low prices if governments provide subsidies (for example, water delivery systems). Changes in regulating services influencing water supply, pollination and food production, and climate have very strong impacts on this element of human well-being. These, too, can be mediated by socioeconomic circumstances, but to a smaller extent. Changes in cultural services have relatively weak linkages to material elements of well-being. Changes in supporting services have a strong influence by virtue of their influence on provisioning and regulating services.
Health
By health, we refer to the ability of an individual to feel well and be strong, or in other words to be adequately nourished and free from disease, to have access to adequate and clean drinking water and clean air, and to have energy to keep warm and cool. Human health is both a product and a determinant of well-being. Changes in provisioning services such as food, water, medicinal plants, and access to new medicines and changes in regulating services that influence air quality, water quality, disease regulation, and waste treatment also have very strong impacts on health. Changes in cultural services can have strong influences on health, since they affect spiritual, inspirational, aesthetic, and recreational opportunities, and these in turn affect both physical and emotional states. Changes in supporting services have a strong influence on all of the other categories of services. These benefits are moderately mediated by socioeconomic circumstances. The wealthy can purchase substitutes for some health benefits of ecosystems (such as medicinal plants or water quality), but they remain susceptible to changes affecting air quality.
Good Social Relations
Good social relations refer to the presence of social cohesion, mutual respect, and the ability to help others and provide for children. Changes in provisioning and regulating ecosystem services can affect social relations, principally through their more direct impacts on material well-being, health, and security. Changes in cultural services can have a strong influence on social relations, particularly in cultures that have retained strong connections to local environments. Changes in provisioning and regulating services can be mediated by socioeconomic factors, but those in cultural services cannot.
Even a wealthy country like Sweden or the United Kingdom cannot readily purchase a substitute for a cultural landscape that is valued by the people in the community. Changes in ecosystems have tended to increase the accessibility that people have to ecosystems for recreation and ecotourism. There are clear examples of declining ecosystem services disrupting social relations or resulting in conflicts. Indigenous societies whose cultural identities are tied closely to particular habitats or wildlife suffer if habitats are destroyed or wildlife populations decline. Such impacts have been observed in coastal fishing communities, Arctic populations, traditional forest societies, and pastoral nomadic societies (C5.4.4).
Security
By security, we refer to safety of person and possessions, secure access to necessary resources, and security from natural and human-made disasters. Changes in regulating services such as disease regulation, climate regulation, and flood regulation have very strong influences on security. Changes in provisioning services such as food and water have strong impacts on security, since degradation of these can lead to loss of access to these essential resources. Changes in cultural services can influence security since they can contribute to the breakdown or strengthening of social networks within society. Changes in supporting services have a strong influence by virtue of their influence on all the other categories of services. These benefits are moderately mediated by socioeconomic circumstances. The wealthy have access to some safety nets that can minimize the impacts of some ecosystem changes (such as flood or drought insurance). Nevertheless, the wealthy cannot entirely escape exposure to some of these changes in areas where they live. One example of an aspect of security affected by ecosystem change involves influences on the severity and magnitude of floods and major fires. The incidence of these has increased significantly over the past 50 years. Changes in ecosystems and in the management of ecosystems have contributed to these trends. The canalization of rivers, for example, tends to decrease the incidence and impact of small flood events and increase the incidence and severity of large ones. On average, 140 million people are affected by floods each year—more than all other natural or technological disasters put together. Between 1990 and 1999, more than 100,000 people were killed in floods, which caused a total of $243 billion in damages (C7.4.4).
Freedom of Choice and Action
Freedom of choice and action refers to the ability of individuals to control what happens to them and to be able to achieve what they value doing or being. Freedom and choice cannot exist without the presence of the other elements of well-being, so there is an indirect influence of changes in all categories of ecosystem services on the attainment of this constituent of well-being. The influence of ecosystem change on freedom and choice is heavily mediated by socioeconomic circumstances. The wealthy and people living in countries with efficient governments and strong civil society can maintain freedom and choice even in the face of significant ecosystem change, while this would be impossible for the poor if, for example, the ecosystem change resulted in a loss of livelihood. In the aggregate, the state of our knowledge about the impact that changing ecosystem conditions have on freedom and choice is relatively limited.
Declining provision of fuelwood and drinking water has been shown to increase the amount of time needed to collect such basic necessities, which in turn reduces the amount of time available for education, employment, and care of family members. Such impacts are typically thought to be disproportionately experienced by women (although the empirical foundation for this view is relatively limited) (C5.4.2). The degradation of ecosystem services often causes significant harm to human well-being (C5 Box 5.2). The information available to assess the consequences of changes in ecosystem services for human well-being is relatively limited. Many ecosystem services have not been monitored, and it is also difficult to estimate the relative influence of changes in ecosystem services in relation to other social, cultural, and economic factors that also affect human well-being. Nevertheless, the following evidence demonstrates that the harmful effects of the degradation of ecosystem services on livelihoods, health, and local and national economies are substantial. - Most resource management decisions are most strongly influenced by ecosystem services entering markets; as a result, the non-marketed benefits are often lost or degraded. Many ecosystem services, such as the purification of water, regulation of floods, or provision of aesthetic benefits, do not pass through markets. The benefits they provide to society, therefore, are largely unrecorded: only a portion of the total benefits provided by an ecosystem make their way into statistics, and many of these are misattributed (the water regulation benefits of wetlands, for example, do not appear as benefits of wetlands but as higher profits in water-using sectors). Moreover, for ecosystem services that do not pass through markets there is often insufficient incentive for individuals to invest in maintenance (although in some cases common property management systems provide such incentives). Typically, even if individuals are aware of the services provided by an ecosystem, they are neither compensated for providing these services nor penalized for reducing them. These non-marketed benefits are often high and sometimes more valuable than the marketed benefits. For example: - Total economic value of forests: One of the most comprehensive studies to date, which examined the marketed and non-marketed economic values associated with forests in eight Mediterranean countries, found that timber and fuelwood generally accounted for less than a third of total economic value in each country. (See Figure 3.2.) - Recreational benefits of protected areas: The annual recreational value of the coral reefs of each of six Marine Management Areas in the Hawaiian Islands in 2003 ranged from $300,000 to $35 million. - Water quality: The net present value in 1998 of protecting water quality in the 360-kilometer Catawba River in the United States for five years was estimated to be $346 million. - Water purification service of wetlands: About half of the total economic value of the Danube River Floodplain in 1992 could be accounted for in its role as a nutrient sink. - Native pollinators: A study in Costa Rica found that forest-based pollinators increased coffee yields by 20% within 1 kilometer of the forest (as well as increasing the quality of the coffee).
During 2000–03, pollination services from two forest fragments (of 46 and 111 hectares) thus increased the income of a 1,100-hectare farm by $60,000 a year, a value commensurate with expected revenues from competing land uses. - Flood control: Muthurajawela Marsh, a 3,100-hectare coastal peat bog in Sri Lanka, provides an estimated $5 million in annual benefits ($1,750 per hectare) through its role in local flood control. - The total economic value associated with managing ecosystems more sustainably is often higher than the value associated with the conversion of the ecosystem through farming, clear-cut logging, or other intensive uses. Relatively few studies have compared the total economic value (including values of both marketed and non-marketed ecosystem services) of ecosystems under alternate management regimes, but a number of studies that do exist have found that the benefit of managing the ecosystem more sustainably exceeded that of converting the ecosystem (see Figure 3.3), although the private benefits—that is, the actual monetary benefits captured from the services entering the market—would favor conversion or unsustainable management. These studies are consistent with the understanding that market failures associated with ecosystem services lead to greater conversion of ecosystems than is economically justified. However, this finding would not hold at all locations. For example, the value of conversion of an ecosystem in areas of prime agricultural land or in urban regions often exceeds the total economic value of the intact ecosystem. (Although even in dense urban areas, the total economic value of maintaining some “green space” can be greater than that of developing these sites.) - The economic and public health costs associated with damage to ecosystem services can be substantial. - The early 1990s collapse of the Newfoundland cod fishery due to overfishing (see Figure 3.4) resulted in the loss of tens of thousands of jobs and has cost at least $2 billion in income support and retraining. - The cost of U.K. agriculture in 1996 resulting from the damage that agricultural practices cause to water (pollution, eutrophication), air (emissions of greenhouse gases), soil (off-site erosion damage, carbon dioxide loss), and biodiversity was $2.6 billion, or 9% of average yearly gross farm receipts for the 1990s. Similarly, the damage costs of freshwater eutrophication alone in England and Wales were estimated to be $105–160 million per year in the 1990s, with an additional $77 million per year being spent to address those damages. - The burning of 10 million hectares of Indonesia’s forests in 1997/98 cost an estimated $9.3 billion in increased health care, lost production, and lost tourism revenues and affected some 20 million people across the region. - The total damages for the Indian Ocean region over 20 years (with a 10% discount rate) resulting from the long-term impacts of the massive 1998 coral bleaching episode are estimated to be between $608 million (if there is only a slight decrease in tourism-generated income and employment) and $8 billion (if tourism income and employment and fish productivity drop significantly and reefs cease to function as a protective barrier). - The net annual loss of economic value associated with invasive species in the fynbos vegetation of the Cape Floral region of South Africa in 1997 was estimated to be $93.5 million, equivalent to a reduction of more than 40% in the potential economic value of the vegetation without the invasive species.
The invasive species have caused losses of biodiversity, water, soil, and scenic beauty, although they also provide some benefits, such as provision of firewood. - The incidence of diseases of marine organisms and emergence of new pathogens is increasing, and some of these, such as ciguatera, harm human health (C19.3.1). Episodes of harmful (including toxic) algal blooms in coastal waters are increasing in frequency and intensity, harming other marine resources such as fisheries and harming human health (R16 Figure 16.3). In a particularly severe outbreak in Italy in 1989, harmful algal blooms cost the coastal aquaculture industry $10 million and the Italian tourism industry $11.4 million (C19.3.1). - The number of both floods and fires has increased significantly, in part due to ecosystem changes, in the past 50 years. Examples are the increased susceptibility of coastal populations to tropical storms when mangrove forests are cleared and the increase in downstream flooding that followed land use changes in the upper Yangtze River (C.SDM). Annual economic losses from extreme events increased tenfold from the 1950s to approximately $70 billion in 2003, of which natural catastrophes—floods, fires, storms, drought, and earthquakes—accounted for 84% of insured losses. - Significant investments are often needed to restore or maintain non-marketed ecosystem services. - In South Africa, invasive tree species threaten both native species and water flows by encroaching into natural habitats, with serious impacts for economic growth and human well-being. In response, the South African government established the “Working for Water Programme.” Between 1995 and 2001 the program invested $131 million (at 2001 exchange rates) in clearing programs to control the invasive species. - The state of Louisiana has put in place a $14-billion wetland restoration plan to protect 10,000 square kilometers of marsh, swamp, and barrier islands in part to reduce storm surges generated by hurricanes. Although degradation of ecosystem services could be significantly slowed or reversed if the full economic value of the services were taken into account in decision-making, economic considerations alone would likely lead to lower levels of biodiversity (medium certainty) (CWG). Although most or all biodiversity has some economic value (the option value of any species is always greater than zero), that does not mean that the protection of all biodiversity is always economically justified. Other utilitarian benefits often “compete” with the benefits of maintaining greater diversity. For example, many of the steps taken to increase the production of ecosystem services involve the simplification of natural systems. (Agriculture, for instance, typically has involved the replacement of relatively diverse systems with more simplified production systems.) And protecting some other ecosystem services may not necessarily require the conservation of biodiversity. (For example, a forested watershed could provide clean water whether it was covered in a diverse native forest or in a single-species plantation.) Ultimately, the level of biodiversity that survives on Earth will be determined not just by utilitarian considerations but to a significant extent by ethical concerns, including considerations of the intrinsic values of species. Even wealthy populations cannot be fully insulated from the degradation of ecosystem services (CWG). 
The degradation of ecosystem services influences human well-being in industrial regions as well as among wealthy populations in developing countries. - The physical, economic, or social impacts of ecosystem service degradation may cross boundaries. (See Figure 3.5.) Land degradation and fires in poor countries, for example, have contributed to air quality degradation (dust and smoke) in wealthy ones. - Degradation of ecosystem services exacerbates poverty in developing countries, which can affect neighboring industrial countries by slowing regional economic growth and contributing to the outbreak of conflicts or the migration of refugees. - Changes in ecosystems that contribute to greenhouse gas emissions contribute to global climate changes that affect all countries. - Many industries still depend directly on ecosystem services. The collapse of fisheries, for example, has harmed many communities in industrial countries. Prospects for the forest, agriculture, fishing, and ecotourism industries are all directly tied to ecosystem services, while other sectors such as insurance, banking, and health are strongly, if less directly, influenced by changes in ecosystem services. - Wealthy populations are insulated from the harmful effects of some aspects of ecosystem degradation, but not all. For example, substitutes are typically not available when cultural services are lost. While traditional natural resource sectors such as agriculture, forestry, and fisheries are still important in industrial-country economies, the relative economic and political significance of other sectors has grown as a result of the ongoing transition from agricultural to industrial and service economies (S7). Over the past two centuries, the economic structure of the world’s largest economies has shifted significantly from agricultural production to industry and, in particular, to service industries. (See Figure 3.6.) These changes increase the relative significance of the industrial and service sectors (using conventional economic measures that do not factor in non-marketed costs and benefits) in comparison to agriculture, forestry, and fisheries, although natural resource–based sectors often still dominate in developing countries. In 2000, agriculture accounted for 5% of gross world product, industry 31%, and service industries 64%. At the same time, the importance of other non-marketed ecosystem services has grown, although many of the benefits provided by these services are not captured in national economic statistics. The economic value of water from forested ecosystems near urban populations, for example, now sometimes exceeds the value of timber in those ecosystems. Economic and employment contributions from ecotourism, recreational hunting, and fishing have all grown. Increased trade has often helped meet growing demand for ecosystem services such as grains, fish, and timber in regions where their supply is limited. While this lessens pressures on ecosystem services within the importing region, it increases pressures in the exporting region. Fish products are heavily traded, and approximately 50% of exports are from developing countries. Exports from these nations and the Southern Hemisphere presently offset much of the shortfall of supply in European, North American, and East Asian markets (C18.ES). Trade has increased the quantity and quality of fish supplied to wealthy countries, in particular the United States, those in Europe, and Japan, despite reductions in marine fish catch (C18.4.1).
The value of international trade in forest products has increased much faster than increases in harvests. (Roundwood harvests grew by 60% between 1961 and 2000, while the value of international timber trade increased twenty-five-fold (C9.ES).) The United States, Germany, Japan, the United Kingdom, and Italy were the destinations of more than half of the imports in 2000, while Canada, the United States, Sweden, Finland, and Germany accounted for more than half of the exports. Trade in commodities such as grain, fish, and timber is accompanied by a “virtual trade” in other ecosystem services that are required to support the production of these commodities. Globally, the international virtual water trade in crops has been estimated at between 500 and 900 cubic kilometers per year, and a further 130–150 cubic kilometers per year is traded in livestock and livestock products. For comparison, current rates of water consumption for irrigation total 1,200 cubic kilometers per year (C7.3.2). Changes in ecosystem services affect people living in urban ecosystems both directly and indirectly. Likewise, urban populations have strong impacts on ecosystem services both in the local vicinity and at considerable distances from urban centers (C27). Almost half of the world’s population now lives in urban areas, and this proportion is growing. Urban development often threatens the availability of water, air and water quality, waste processing, and many other qualities of the ambient environment that contribute to human well-being, and this degradation is particularly threatening to vulnerable groups such as poor people. A wide range of ecosystem services are still important to livelihoods. For example, agriculture practiced within urban boundaries contributes to food security in urban sub-Saharan Africa. Urban populations affect distant ecosystems through trade and consumption and are affected by changes in distant ecosystems that affect the local availability or price of commodities, air or water quality, or global climate, or that affect socioeconomic conditions in those countries in ways that influence the economic, demographic, or security situation in distant urban areas. Spiritual and cultural values of ecosystems are as important as other services for many local communities. Human cultures, knowledge systems, religions, heritage values, and social interactions have always been influenced and shaped by the nature of the ecosystem and ecosystem conditions in which each culture is based. People have benefited in many ways from cultural ecosystem services, including aesthetic enjoyment, recreation, artistic and spiritual fulfillment, and intellectual development (C17.ES). Several of the MA sub-global assessments highlighted the importance of these cultural services and spiritual benefits to local communities (SG.SDM). For example, local villages in India preserve selected sacred groves of forest for spiritual reasons, and urban parks provide important cultural and recreational services in cities around the world.
Ecosystem Services, Millennium Development Goals, and Poverty Reduction
The degradation of ecosystem services poses a significant barrier to the achievement of the Millennium Development Goals and to the MDG targets for 2015. (See Box 3.2.) Many of the regions facing the greatest challenges in achieving the MDGs overlap with the regions facing the greatest problems related to the sustainable supply of ecosystem services (R19.ES).
Among other regions, this includes sub-Saharan Africa, Central Asia, and parts of South and Southeast Asia as well as some regions in Latin America. Sub-Saharan Africa has experienced increases in maternal deaths and income poverty (those living on less than $1 a day), and the number of people living in poverty there is forecast to rise from 315 million in 1999 to 404 million by 2015 (R19.1). Per capita food production has been declining in southern Africa, and relatively little gain is projected in the MA scenarios. Many of these regions include large areas of drylands, in which a combination of growing populations and land degradation are increasing the vulnerability of people to both economic and environmental change. In the past 20 years, these same regions have experienced some of the highest rates of forest and land degradation in the world. Box 3.2. Ecosystems and the Millennium Development Goals The eight Millennium Development Goals were endorsed by governments at the United Nations in September 2000. The MDGs aim to improve human well-being by reducing poverty, hunger, and child and maternal mortality; ensuring education for all; controlling and managing diseases; tackling gender disparity; ensuring sustainable development; and pursuing global partnerships. For each MDG, governments have agreed to between 1 and 8 targets (a total of 15 targets) that are to be achieved by 2015. Slowing or reversing the degradation of ecosystem services will contribute significantly to the achievement of many of the MDGs. Despite the progress achieved in increasing the production and use of some ecosystem services, levels of poverty remain high, inequities are growing, and many people still do not have a sufficient supply of or access to ecosystem services (C5). - In 2001, some 1.1 billion people survived on less than $1 per day of income, most of them (roughly 70%) in rural areas where they are highly dependent on agriculture, grazing, and hunting for subsistence (R19.2.1). - Inequality in income and other measures of human well-being has increased over the past decade (C5.ES). A child born in sub-Saharan Africa is 20 times more likely to die before age five than a child born in an industrial country, and this ratio is higher than it was a decade ago. During the 1980s, only four countries experienced declines in their rankings in the Human Development Index (an aggregate measure of economic well-being, health, and education); during the 1990s, 21 countries showed declines, and 14 of them were in sub-Saharan Africa. - Despite the growth in per capita food production in the past four decades, an estimated 852 million people were undernourished in 2000–02, up 37 million from 1997–99. Of these, nearly 95% live in developing countries (C8.ES). South Asia and sub-Saharan Africa, the regions with the largest numbers of undernourished people, are also the regions where growth in per capita food production has lagged the most. Most notably, per capita food production has declined in sub-Saharan Africa (C28.5.1). - Some 1.1 billion people still lack access to improved water supply and more than 2.6 billion have no access to improved sanitation. Water scarcity affects roughly 1–2 billion people worldwide. Since 1960, the ratio of water use to accessible supply has grown by 20% per decade (C7.ES, C7.2.3). The degradation of ecosystem services is harming many of the world’s poorest people and is sometimes the principal factor causing poverty. 
This is not to say that ecosystem changes such as increased food production have not also helped to lift hundreds of millions of people out of poverty. But these changes have harmed many other communities, and their plight has been largely overlooked. Examples of these impacts include: - Half of the urban population in Africa, Asia, Latin America, and the Caribbean suffers from one or more diseases associated with inadequate water and sanitation (C.SDM). Approximately 1.7 million people die annually as a result of inadequate water, sanitation, and hygiene (C7.ES). - The declining state of capture fisheries is reducing a cheap source of protein in developing countries. Per capita fish consumption in developing countries, excluding China, declined between 1985 and 1997 (C18.ES). - Desertification affects the livelihoods of millions of people, including a large portion of the poor in drylands (C22). The pattern of “winners” and “losers” associated with ecosystem changes, and in particular the impact of ecosystem changes on poor people, women, and indigenous peoples, has not been adequately taken into account in management decisions (R17). Changes in ecosystems typically yield benefits for some people and exact costs on others, who may either lose access to resources or livelihoods or be affected by externalities associated with the change. For several reasons, groups such as the poor, women, and indigenous communities have tended to be harmed by these changes. - Many changes have been associated with the privatization of what were formerly common pool resources, and the individuals who are dependent on those resources have thus lost rights to them. This has been particularly the case for indigenous peoples, forest-dependent communities, and other groups relatively marginalized from political and economic sources of power. - Some of the people and places affected by changes in ecosystems and ecosystem services are highly vulnerable and poorly equipped to cope with the major ecosystem changes that may occur (C6.ES). Highly vulnerable groups include those whose needs for ecosystem services already exceed the supply, such as people lacking adequate clean water supplies and people living in areas with declining per capita agricultural production. Vulnerability has also been increased by the growth of populations in ecosystems at risk of disasters such as floods or drought, often due to inappropriate policies that have encouraged this growth. Populations are growing in low-lying coastal areas and dryland ecosystems. In part due to the growth in these vulnerable populations, the number of natural disasters (floods, droughts, earthquakes, and so on) requiring international assistance has quadrupled over the past four decades. Finally, vulnerability has been increased when the resilience in either the social or ecological system has been diminished, as for example through the loss of drought-resistant crop varieties. - Significant differences between the roles and rights of men and women in many societies lead to women’s increased vulnerability to changes in ecosystem services. Rural women in developing countries are the main producers of staple crops like rice, wheat, and maize (R6 Box 6.1).
Because the gendered division of labor within many societies places responsibility for routine care of the household with women, even when women also play important roles in agriculture, the degradation of ecosystem services such as water quality or quantity, fuelwood, or agricultural or rangeland productivity often results in increased labor demands on women. This can affect the larger household by diverting time from food preparation, child care, education of children, and other beneficial activities (C6.3.3). Yet gender bias persists in agricultural policies in many countries, and rural women involved in agriculture tend to be the last to benefit from—or in some cases are negatively affected by—development policies and new technologies. - The reliance of the rural poor on ecosystem services is rarely measured and thus typically overlooked in national statistics and in poverty assessments, resulting in inappropriate strategies that do not take into account the role of the environment in poverty reduction. For example, a recent study that synthesized data from 17 countries found that 22% of household income for rural communities in forested regions comes from sources typically not included in national statistics, such as harvesting wild food, fuelwood, fodder, medicinal plants, and timber. These activities generated a much higher proportion of poorer families’ total income than wealthy families’—income that was of particular significance in periods of both predictable and unpredictable shortfalls in other livelihood sources (R17). Poor people have historically lost access to ecosystem services disproportionately as demand for those services has grown. Coastal habitats are often converted to other uses, frequently for aquaculture ponds or cage culturing of highly valued species such as shrimp and salmon. Despite the fact that the area is still used for food production, local residents are often displaced, and the food produced is usually not for local consumption but for export (C18.4.1). Many areas where overfishing is a concern are also low-income, food-deficit countries. For example, significant quantities of fish are caught by large distant water fleets in the exclusive economic zones of Mauritania, Senegal, Gambia, Guinea Bissau, and Sierra Leone. Much of the catch is exported or shipped directly to Europe, while compensation for access is often low compared with the value of the product landed overseas. These countries do not necessarily benefit through increased fish supplies or higher government revenues when foreign distant water fleets ply their waters (C18.5.1). Diminished human well-being tends to increase immediate dependence on ecosystem services, and the resultant additional pressure can damage the capacity of those ecosystems to deliver services (SG3.ES). As human well-being declines, the options available to people that allow them to regulate their use of natural resources at sustainable levels decline as well. This in turn increases pressure on ecosystem services and can create a downward spiral of increasing poverty and further degradation of ecosystem services. Dryland ecosystems tend to have the lowest levels of human well-being (C5.3.3). Drylands have the lowest per capita GDP and the highest infant mortality rates of all of the MA systems. Nearly 500 million people live in rural areas in dry and semiarid lands, mostly in Asia and Africa but also in regions of Mexico and northern Brazil (C5 Box 5.2).
The small amount of precipitation and its high variability limit the productive potential of drylands for settled farming and nomadic pastoralism, and many ways of expanding production (such as reducing fallow periods, overgrazing pasture areas, and cutting trees for fuelwood) result in environmental degradation. The combination of high variability in environmental conditions and relatively high levels of poverty leads to situations where human populations can be extremely sensitive to changes in the ecosystem. In the drylands of the Sahel, for example, a period of higher-than-normal rainfall from the 1950s to the mid-1960s attracted people to the region; when drought followed, an estimated 250,000 people died, along with nearly all their cattle, sheep, and goats (C5 Box 5.1). Although population growth has historically been higher in high-productivity ecosystems or urban areas, during the 1990s it was highest in less productive ecosystems (C5.ES, C5.3.4). In that decade dryland systems (encompassing both rural and urban regions of drylands) experienced the highest, and mountain systems the second highest, population growth rate of any of the systems examined in the MA. (See Figure 3.7.) One factor that has helped reduce relative population growth in marginal lands has been migration of some people out of marginal lands to cities or to agriculturally productive regions; today the opportunities for such migration are limited due to a combination of factors, including poor economic growth in some cities, tighter immigration restrictions in wealthy countries, and limited availability of land in more productive regions.
What are the most critical factors causing ecosystem changes?
Natural or human-induced factors that directly or indirectly cause a change in an ecosystem are referred to as “drivers.” A direct driver unequivocally influences ecosystem processes. An indirect driver operates more diffusely, by altering one or more direct drivers. Drivers affect ecosystem services and human well-being at different spatial and temporal scales, which makes both their assessment and their management complex (SG7). Climate change may operate on a global or a large regional spatial scale; political change may operate at the scale of a nation or a municipal district. Sociocultural change typically occurs slowly, on a time scale of decades (although abrupt changes can sometimes occur, as in the case of wars or political regime changes), while economic changes tend to occur more rapidly. As a result of this spatial and temporal dependence of drivers, the forces that appear to be most significant at a particular location and time may not be the most significant over larger (or smaller) regions or time scales. In the aggregate and at a global scale, there are five indirect drivers of changes in ecosystems and their services: population change, change in economic activity, sociopolitical factors, cultural factors, and technological change. Collectively these factors influence the level of production and consumption of ecosystem services and the sustainability of production. Both economic growth and population growth lead to increased consumption of ecosystem services, although the harmful environmental impacts of any particular level of consumption depend on the efficiency of the technologies used in the production of the service. These factors interact in complex ways in different locations to change pressures on ecosystems and uses of ecosystem services.
Driving forces are almost always multiple and interactive, so that a one-to-one linkage between particular driving forces and particular changes in ecosystems rarely exists. Even so, changes in any one of these indirect drivers generally result in changes in ecosystems. The causal linkage is almost always highly mediated by other factors, thereby complicating statements of causality or attempts to establish the proportionality of various contributors to changes. There are five major indirect drivers: - Demographic Drivers: Global population doubled in the past 40 years and increased by 2 billion people in the last 25 years, reaching 6 billion in 2000 (S7.2.1). Developing countries have accounted for most recent population growth in the past quarter-century, but there is now an unprecedented diversity of demographic patterns across regions and countries. Some high-income countries such as the United States are still experiencing high rates of population growth, while some developing countries such as China, Thailand, and North and South Korea have very low rates. In the United States, high population growth is due primarily to high levels of immigration. About half the people in the world now live in urban areas (although urban areas cover less than 3% of the terrestrial surface), up from less than 15% at the start of the twentieth century (C27.1). High-income countries typically have populations that are 70–80% urban. Some developing-country regions, such as parts of Asia, are still largely rural, while Latin America, at 75% urban, is indistinguishable from high-income countries in this regard (S7.2.1). - Economic Drivers: Global economic activity increased nearly sevenfold between 1950 and 2000 (S7.SDM). With rising per capita income, the demand for many ecosystem services grows. At the same time, the structure of consumption changes. In the case of food, for example, as income grows the share of additional income spent on food declines, the importance of starchy staples (such as rice, wheat, and potatoes) declines, diets include more fat, meat and fish, and fruits and vegetables, and the proportionate consumption of industrial goods and services rises (S7.2.2). In the late twentieth century, income was distributed unevenly, both within countries and around the world. The level of per capita income was highest in North America, Western Europe, Australasia, and Northeast Asia, but both GDP growth rates and per capita GDP growth rates were highest in South Asia, China, and parts of South America (S7.2.2). (See Figures 4.1 and 4.2.) Growth in international trade flows has exceeded growth in global production for many years, and the differential may be growing. In 2001, international trade in goods was equal to 40% of gross world product. (S7.2.2). Taxes and subsidies are important indirect drivers of ecosystem change. Fertilizer taxes or taxes on excess nutrients, for example, provide an incentive to increase the efficiency of the use of fertilizer applied to crops and thereby reduce negative externalities. Currently, many subsidies substantially increase rates of resource consumption and increase negative externalities. Annual subsidies to conventional energy, which encourage greater use of fossil fuels and consequently emissions of greenhouse gases, are estimated to have been $250–300 billion in the mid-1990s (S7.ES). 
The 2001–03 average subsidies paid to the agricultural sectors of OECD countries were over $324 billion annually (S7.ES), encouraging greater food production and associated water consumption and nutrient and pesticide release. At the same time, many developing countries also have significant agricultural production subsidies. - Sociopolitical Drivers: Sociopolitical drivers encompass the forces influencing decision-making and include the quantity of public participation in decision-making, the groups participating in public decision-making, the mechanisms of dispute resolution, the role of the state relative to the private sector, and levels of education and knowledge (S7.2.3). These factors in turn influence the institutional arrangements for ecosystem management, as well as property rights over ecosystem services. Over the past 50 years there have been significant changes in sociopolitical drivers. There is a declining trend in centralized authoritarian governments and a rise in elected democracies. The role of women is changing in many countries, average levels of formal education are increasing, and there has been a rise in civil society (such as increased involvement of NGOs and grassroots organizations in decision-making processes). The trend toward democratic institutions has helped give power to local communities, especially women and resource-poor households (S7.2.3). There has been an increase in multilateral environmental agreements. The importance of the state relative to the private sector—as a supplier of goods and services, as a source of employment, and as a source of innovation—is declining. - Cultural and Religious Drivers: To understand culture as a driver of ecosystem change, it is most useful to think of it as the values, beliefs, and norms that a group of people share. In this sense, culture conditions individuals’ perceptions of the world, influences what they consider important, and suggests what courses of action are appropriate and inappropriate (S7.2.4). Broad comparisons of whole cultures have not proved useful because they ignore vast variations in values, beliefs, and norms within cultures. Nevertheless, cultural differences clearly have important impacts on direct drivers. Cultural factors, for example, can influence consumption behavior (what and how much people consume) and values related to environmental stewardship, and they may be particularly important drivers of environmental change. - Science and Technology: The development and diffusion of scientific knowledge and technologies that exploit that knowledge has profound implications for ecological systems and human well-being. The twentieth century saw tremendous advances in understanding how the world works physically, chemically, biologically, and socially and in the applications of that knowledge to human endeavors. Science and technology are estimated to have accounted for more than one third of total GDP growth in the United States from 1929 to the early 1980s, and for 16–47% of GDP growth in selected OECD countries in 1960–95 (S7.2.5). The impact of science and technology on ecosystem services is most evident in the case of food production. Much of the increase in agricultural output over the past 40 years has come from an increase in yields per hectare rather than an expansion of area under cultivation. For instance, wheat yields rose 208%, rice yields rose 109%, and maize yields rose 157% in the past 40 years in developing countries (S7.2.5). 
At the same time, technological advances can also lead to the degradation of ecosystem services. Advances in fishing technologies, for example, have contributed significantly to the depletion of marine fish stocks. Consumption of ecosystem services is slowly being decoupled from economic growth. Growth in the use of ecosystem services over the past five decades was generally much less than the growth in GDP. This change reflects structural changes in economies, but it also results from new technologies and new management practices and policies that have increased the efficiency with which ecosystem services are used and provided substitutes for some services. Even with this progress, though, the absolute level of consumption of ecosystem services continues to grow, which is consistent with the pattern for the consumption of energy and materials such as metals: in the 200 years for which reliable data are available, growth of consumption of energy and materials has outpaced increases in materials and energy efficiency, leading to absolute increases of materials and energy use (S7.ES). Global trade magnifies the effect of governance, regulations, and management practices on ecosystems and their services, enhancing good practices but worsening the damage caused by poor practices (R8, S7). Increased trade can accelerate degradation of ecosystem services in exporting countries if their policy, regulatory, and management systems are inadequate. At the same time, international trade enables comparative advantages to be exploited and accelerates the diffusion of more-efficient technologies and practices. For example, the increased demand for forest products in many countries stimulated by growth in forest products trade can lead to more rapid degradation of forests in countries with poor systems of regulation and management, but can also stimulate a “virtuous cycle” if the regulatory framework is sufficiently robust to prevent resource degradation while trade, and profits, increase. While historically most trade related to ecosystems has involved provisioning services such as food, timber, fiber, genetic resources, and biochemicals, one regulating service—climate regulation, or more specifically carbon sequestration—is now also traded internationally. Urban demographic and economic growth has been increasing pressures on ecosystems globally, but affluent rural and suburban living often places even more pressure on ecosystems (C27.ES). Dense urban settlement is considered to be less environmentally burdensome than urban and suburban sprawl. And the movement of people into urban areas has significantly lessened pressure on some ecosystems and, for example, has led to the reforestation of some parts of industrial countries that had been deforested in previous centuries. At the same time, urban centers facilitate human access to and management of ecosystem services through, for example, economies of scale related to the construction of piped water systems in areas of high population density. Most of the direct drivers of change in ecosystems and biodiversity currently remain constant or are growing in intensity in most ecosystems. (See Figure 4.3.) The most important direct drivers of change in ecosystems are habitat change (land use change and physical modification of rivers or water withdrawal from rivers), overexploitation, invasive alien species, pollution, and climate change. 
For terrestrial ecosystems, the most important direct drivers of change in ecosystem services in the past 50 years, in the aggregate, have been land cover change (in particular, conversion to cropland) and the application of new technologies (which have contributed significantly to the increased supply of services such as food, timber, and fiber) (CWG, S7.2.5, SG8.ES). In 9 of the 14 terrestrial biomes examined in the MA, between one fifth and one half of the area has been transformed, largely to croplands (C4.ES). Only biomes relatively unsuited to crop plants, such as deserts, boreal forests, and tundra, have remained largely untransformed by human action. Both land cover changes and the management practices and technologies used on lands may cause major changes in ecosystem services. New technologies have resulted in significant increases in the supply of some ecosystem services, such as through increases in agricultural yield. In the case of cereals, for example, from the mid-1980s to the late 1990s the global area under cereals fell by around 0.3% a year, while yields increased by about 1.2% a year (C26.4.1). For marine ecosystems and their services, the most important direct driver of change in the past 50 years, in the aggregate, has been fishing (C18). At the beginning of the twenty-first century, the biological capability of commercially exploited fish stocks was probably at a historical low. FAO estimates that about half of the commercially exploited wild marine fish stocks for which information is available are fully exploited and offer no scope for increased catches, and a further quarter are overexploited (C8.2.2). As noted in Key Question 1, fishing pressure is so strong in some marine systems that the biomass of some targeted species, especially larger fishes, and those caught incidentally has been reduced to one tenth of levels prior to the onset of industrial fishing (C18.ES). Fishing has had a particularly significant impact in coastal areas but is now also affecting the open oceans. For freshwater ecosystems and their services, depending on the region, the most important direct drivers of change in the past 50 years include modification of water regimes, invasive species, and pollution, particularly high levels of nutrient loading. It is speculated that 50% of inland water ecosystems (excluding large lakes and closed seas) were converted during the twentieth century (C20.ES). Massive changes have been made in water regimes: in Asia, 78% of the total reservoir volume was constructed in the last decade, and in South America almost 60% of all reservoirs have been built since the 1980s (C20.4.2). The introduction of non-native invasive species is one of the major causes of species extinction in freshwater systems. While the presence of nutrients such as phosphorus and nitrogen is necessary for biological systems, high levels of nutrient loading cause significant eutrophication of water bodies and contribute to high levels of nitrate in drinking water in some locations. (The nutrient load refers to the total amount of nitrogen or phosphorus entering the water during a given time.) Non-point pollution sources such as storm water runoff in urban areas, poor or nonexistent sanitation facilities in rural areas, and the flushing of livestock manure by rainfall and snowmelt are also causes of contamination (C20.4.5). Pollution from point sources such as mining has had devastating local and regional impacts on the biota of inland waters. 
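The cereal figures cited above lend themselves to a quick consistency check: an area decline of roughly 0.3% a year combined with yield growth of roughly 1.2% a year compounds to net production growth of a little under 1% a year. What follows is a minimal sketch of that arithmetic, not a calculation taken from the MA; the assumption that the two cited rates compound independently is introduced here only for illustration.

# Back-of-envelope check of the cereal figures cited above (illustrative only;
# assumes the two cited annual rates compound independently, an assumption made
# here rather than a statement from the MA).
area_change = -0.003    # global cereal area: roughly -0.3% per year (C26.4.1)
yield_change = 0.012    # cereal yields: roughly +1.2% per year (C26.4.1)
net_growth = (1 + area_change) * (1 + yield_change) - 1
print(f"approximate net change in cereal production: {net_growth:.2%} per year")
# prints roughly 0.90% per year, i.e., output still grows even as area shrinks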
Coastal ecosystems are affected by multiple direct drivers. Fishing pressures in coastal ecosystems are compounded by a wide array of other drivers, including land-, river-, and ocean-based pollution, habitat loss, invasive species, and nutrient loading. Although human activities have increased sediment flows in rivers by about 20%, reservoirs and water diversions prevent about 30% of sediments from reaching the oceans, resulting in a net reduction of 10% in the sediment delivery to estuaries, which are key nursery areas and fishing grounds (C19.ES). Approximately 17% of the world's population lives within the boundaries of the MA coastal system (up to an elevation of 50 meters above sea level and no further than 100 kilometers from a coast), and approximately 40% lives in the full area within 50 kilometers of a coast. And the absolute number is increasing through a combination of in-migration, high reproduction rates, and tourism (C.SDM). Demand on coastal space for shipping, waste disposal, military and security uses, recreation, and aquaculture is increasing. The greatest threat to coastal systems is the development-related conversion of coastal habitats such as forests, wetlands, and coral reefs through coastal urban sprawl, resort and port development, aquaculture, and industrialization. Dredging, reclamation, and destructive fishing also account for widespread, effectively irreversible destruction. Shore protection structures and engineering works (beach armoring, causeways, bridges, and so on), by changing coastal dynamics, have impacts extending beyond their direct footprints. Nitrogen loading to the coastal zone has increased by about 80% worldwide and has driven coral reef community shifts (C.SDM). Over the past four decades, excessive nutrient loading has emerged as one of the most important direct drivers of ecosystem change in terrestrial, freshwater, and marine ecosystems. (See Table 4.1.) While the introduction of nutrients into ecosystems can have both beneficial effects (such as increased crop productivity) and adverse effects (such as eutrophication of inland and coastal waters), the beneficial effects will eventually reach a plateau as more nutrients are added (that is, additional inputs will not lead to further increases in crop yield), while the harmful effects will continue to grow. Synthetic production of nitrogen fertilizer has been an important driver for the remarkable increase in food production that has occurred during the past 50 years (S7.3.2). World consumption of nitrogenous fertilizers grew nearly eightfold between 1960 and 2003, from 10.8 million tons to 85.1 million tons. As much as 50% of the nitrogen fertilizer applied may be lost to the environment, depending on how well the application is managed. Since excessive nutrient loading is largely the result of applying more nutrients than crops can use, it harms both farm incomes and the environment (S7.3.2). Excessive flows of nitrogen contribute to eutrophication of freshwater and coastal marine ecosystems and acidification of freshwater and terrestrial ecosystems (with implications for biodiversity in these ecosystems). To some degree, nitrogen also plays a role in the creation of ground-level ozone (which leads to loss of agricultural and forest productivity), destruction of ozone in the stratosphere (which leads to depletion of the ozone layer and increased UV-B radiation on Earth, causing increased incidence of skin cancer), and climate change. 
The resulting health effects include the consequences of ozone pollution on asthma and respiratory function, increased allergies and asthma due to increased pollen production, the risk of blue-baby syndrome, increased risk of cancer and other chronic diseases from nitrates in drinking water, and increased risk of a variety of pulmonary and cardiac diseases from production of fine particles in the atmosphere (R9.ES). Phosphorus application has increased threefold since 1960, with a steady increase until 1990 followed by a leveling off at approximately the application rates of the 1980s. While phosphorus use has increasingly concentrated on phosphorus-deficient soils, the growing phosphorus accumulation in soils contributes to high levels of phosphorus runoff. As with nitrogen loading, the potential consequences include eutrophication of coastal and freshwater ecosystems, which can lead to degraded habitat for fish and decreased quality of water for consumption by humans and livestock. Many ecosystem services are reduced when inland waters and coastal ecosystems become eutrophic. Water from lakes that experience algal blooms is more expensive to purify for drinking or other industrial uses. Eutrophication can reduce or eliminate fish populations. Possibly the most apparent loss in services is the loss of many of the cultural services provided by lakes. Foul odors of rotting algae, slime-covered lakes, and toxic chemicals produced by some blue-green algae during blooms keep people from swimming, boating, and otherwise enjoying the aesthetic value of lakes (S7.3.2). Climate change in the past century has already had a measurable impact on ecosystems. Earth’s climate system has changed since the preindustrial era, in part due to human activities, and it is projected to continue to change throughout the twenty-first century. During the last 100 years, the global mean surface temperature has increased by about 0.6° Celsius, precipitation patterns have changed spatially and temporally, and global average sea level has risen by 0.1–0.2 meters (S7.ES). Observed changes in climate, especially warmer regional temperatures, have already affected biological systems in many parts of the world. There have been changes in species distributions, population sizes, and the timing of reproduction or migration events, as well as an increase in the frequency of pest and disease outbreaks, especially in forested systems. The growing season in Europe has lengthened over the last 30 years (R13.1.3). Many coral reefs have undergone major, although often partially reversible, bleaching episodes when sea surface temperatures have increased during one month by 0.5–1° Celsius above the average of the hottest months, although it is not possible to determine whether these extreme temperatures were a result of human-induced climate change. Extensive coral mortality has occurred with observed local increases in temperature of 3° Celsius (R13.1.3). How might ecosystems and their services change in the future under various plausible scenarios? The MA developed four global scenarios to explore plausible futures for ecosystems and human well-being. (See Box 5.1.) The scenarios were developed with a focus on conditions in 2050, although they include some information through the end of the century. 
They explored two global development paths, one in which the world becomes increasingly globalized and the other in which it becomes increasingly regionalized, as well as two different approaches to ecosystem management, one in which actions are reactive and most problems are addressed only after they become obvious and the other in which ecosystem management is proactive and policies deliberately seek to maintain ecosystem services for the long term: - Global Orchestration: This scenario depicts a globally connected society that focuses on global trade and economic liberalization and takes a reactive approach to ecosystem problems but that also takes strong steps to reduce poverty and inequality and to invest in public goods such as infrastructure and education. Economic growth is the highest of the four scenarios, while this scenario is assumed to have the lowest population in 2050. - Order from Strength: This scenario represents a regionalized and fragmented world that is concerned with security and protection, emphasizes primarily regional markets, pays little attention to public goods, and takes a reactive approach to ecosystem problems. Economic growth rates are the lowest of the scenarios (particularly low in developing countries) and decrease with time, while population growth is the highest. - Adapting Mosaic: In this scenario, regional watershed-scale ecosystems are the focus of political and economic activity. Local institutions are strengthened and local ecosystem management strategies are common; societies develop a strongly proactive approach to the management of ecosystems. Economic growth rates are somewhat low initially but increase with time, and the population in 2050 is nearly as high as in Order from Strength. - TechnoGarden: This scenario depicts a globally connected world relying strongly on environmentally sound technology, using highly managed, often engineered, ecosystems to deliver ecosystem services, and taking a proactive approach to the management of ecosystems in an effort to avoid problems. Economic growth is relatively high and accelerates, while population in 2050 is in the mid-range of the scenarios. The scenarios are not predictions; instead, they were developed to explore the unpredictable and uncontrollable features of change in ecosystem services and a number of socioeconomic factors. No scenario represents business as usual, although all begin from current conditions and trends. The future will represent a mix of approaches and consequences described in the scenarios, as well as events and innovations that have not yet been imagined. No scenario is likely to match the future as it actually occurs. These four scenarios were not designed to explore the entire range of possible futures for ecosystem services—other scenarios could be developed with either more optimistic or more pessimistic outcomes for ecosystems, their services, and human well-being. The scenarios were developed using both quantitative models and qualitative analysis. For some drivers (such as land use change and carbon emissions) and some ecosystem services (such as water withdrawals and food production), quantitative projections were calculated using established, peer-reviewed global models. 
Other drivers (such as economic growth and rates of technological change), ecosystem services (particularly supporting and cultural services such as soil formation and recreational opportunities), and human well-being indicators (such as human health and social relations) were estimated qualitatively. In general, the quantitative models used for these scenarios addressed incremental changes but failed to address thresholds, risk of extreme events, or impacts of large, extremely costly, or irreversible changes in ecosystem services. These phenomena were addressed qualitatively, by considering the risks and impacts of large but unpredictable ecosystem changes in each scenario. Box 5.1. MA Scenarios The Global Orchestration scenario depicts a globally connected society in which policy reforms that focus on global trade and economic liberalization are used to reshape economies and governance, emphasizing the creation of markets that allow equitable participation and provide equitable access to goods and services. These policies, in combination with large investments in global public health and the improvement of education worldwide, generally succeed in promoting economic expansion and lifting many people out of poverty into an expanding global middle class. Supranational institutions in this globalized scenario are well placed to deal with global environmental problems such as climate change and fisheries decline. However, the reactive approach to ecosystem management makes people vulnerable to surprises arising from delayed action. While the focus is on improving the well-being of all people, environmental problems that threaten human well-being are only considered after they become apparent. Growing economies, expansion of education, and growth of the middle class lead to demands for cleaner cities, less pollution, and a more beautiful environment. Rising income levels bring about changes in global consumption patterns, boosting demand for ecosystem services, including agricultural products such as meat, fish, and vegetables. Growing demand for these services leads to declines in others, as forests are converted into cropped area and pasture and the services they formerly provided decline. The problems related to increasing food production, such as loss of wildlands, are not apparent to most people who live in urban areas. They therefore receive only limited attention. Global economic expansion expropriates or degrades many of the ecosystem services poor people once depended on for survival. While economic growth more than compensates for these losses in some regions by increasing the ability to find substitutes for particular ecosystem services, in many other places, it does not. An increasing number of people are affected by the loss of basic ecosystem services essential for human life. While risks seem manageable in some places, in other places there are sudden, unexpected losses as ecosystems cross thresholds and degrade irreversibly. Loss of potable water supplies, crop failures, floods, species invasions, and outbreaks of environmental pathogens increase in frequency. The expansion of abrupt, unpredictable changes in ecosystems, many with harmful effects on increasingly large numbers of people, is the key challenge facing managers of ecosystem services. 
The Order from Strength scenario represents a regionalized and fragmented world that is concerned with security and protection, emphasizes primarily regional markets, and pays little attention to common goods. Nations see looking after their own interests as the best defense against economic insecurity, and the movement of goods, people, and information is strongly regulated and policed. The role of government expands as oil companies, water utilities, and other strategic businesses are either nationalized or subjected to more state oversight. Trade is restricted, large amounts of money are invested in security systems, and technological change slows due to restrictions on the flow of goods and information. Regionalization exacerbates global inequality. Treaties on global climate change, international fisheries, and trade in endangered species are only weakly and haphazardly implemented, resulting in degradation of the global commons. Local problems often go unresolved, but major problems are sometimes handled by rapid disaster relief to at least temporarily resolve the immediate crisis. Many powerful countries cope with local problems by shifting burdens to other, less powerful ones, increasing the gap between rich and poor. In particular, natural resource–intensive industries are moved from wealthier nations to poorer, less powerful ones. Inequality increases considerably within countries as well. Ecosystem services become more vulnerable, fragile, and variable in Order from Strength. For example, parks and reserves exist within fixed boundaries, but climate changes around them, leading to the unintended extirpation of many species. Conditions for crops are often suboptimal, and the ability of societies to import alternative foods is diminished by trade barriers. As a result, there are frequent shortages of food and water, particularly in poor regions. Low levels of trade tend to restrict the number of invasions by exotic species; ecosystems are less resilient, however, and invaders are therefore more often successful when they arrive. In the Adapting Mosaic scenario, regional watershed-scale ecosystems are the focus of political and economic activity. This scenario sees the rise of local ecosystem management strategies and the strengthening of local institutions. Investments in human and social capital are geared toward improving knowledge about ecosystem functioning and management, which results in a better understanding of resilience, fragility, and local flexibility of ecosystems. There is optimism that we can learn, but humility about preparing for surprises and about our ability to know everything about managing ecosystems. There is also great variation among nations and regions in styles of governance, including management of ecosystem services. Some regions explore actively adaptive management, investigating alternatives through experimentation. Others use bureaucratically rigid methods to optimize ecosystem performance. Great diversity exists in the outcome of these approaches: some areas thrive, while others develop severe inequality or experience ecological degradation. Initially, trade barriers for goods and products are increased, but barriers for information nearly disappear (for those who are motivated to use them) due to improving communication technologies and rapidly decreasing costs of access to information. Eventually, the focus on local governance leads to failures in managing the global commons. 
Problems like climate change, marine fisheries, and pollution grow worse, and global environmental problems intensify. Communities slowly realize that they cannot manage their local areas because global and regional problems are infringing on them, and they begin to develop networks among communities, regions, and even nations to better manage the global commons. Solutions that were effective locally are adopted among networks. These networks of regional successes are especially common in situations where there are mutually beneficial opportunities for coordination, such as along river valleys. Sharing good solutions and discarding poor ones eventually improves approaches to a variety of social and environmental problems, ranging from urban poverty to agricultural water pollution. As more knowledge is collected from successes and failures, provision of many services improves. The TechnoGarden scenario depicts a globally connected world relying strongly on technology and highly managed, often engineered ecosystems to deliver ecosystem services. Overall efficiency of ecosystem service provision improves, but it is shadowed by the risks inherent in large-scale human-made solutions and rigid control of ecosystems. Technology and market-oriented institutional reform are used to achieve solutions to environmental problems. These solutions are designed to benefit both the economy and the environment. These changes co-develop with the expansion of property rights to ecosystem services, such as requiring people to pay for pollution they create or paying people for providing key ecosystem services through actions such as preservation of key watersheds. Interest in maintaining, and even increasing, the economic value of these property rights, combined with an interest in learning and information, leads to a flowering of ecological engineering approaches for managing ecosystem services. Investment in green technology is accompanied by a significant focus on economic development and education, improving people’s lives and helping them understand how ecosystems make their livelihoods possible. A variety of problems in global agriculture are addressed by focusing on the multi-functional aspects of agriculture and a global reduction of agricultural subsidies and trade barriers. Recognition of the role of agricultural diversification encourages farms to produce a variety of ecological services rather than simply maximizing food production. The combination of these movements stimulates the growth of new markets for ecosystem services, such as tradable nutrient runoff permits, and the development of technology for increasingly sophisticated ecosystem management. Gradually, environmental entrepreneurship expands as new property rights and technologies co-evolve to stimulate the growth of companies and cooperatives providing reliable ecosystem services to cities, towns, and individual property owners. Innovative capacity expands quickly in developing nations. The reliable provision of ecosystem services as a component of economic growth, together with enhanced uptake of technology due to rising income levels, lifts many of the world’s poor into a global middle class. Elements of human well-being associated with social relations decline in this scenario due to great loss of local culture, customs, and traditional knowledge and the weakening of civil society institutions as an increasing share of interactions take place over the Internet. 
While the provision of basic ecosystem services improves the well-being of the world’s poor, the reliability of the services, especially in urban areas, becomes more critical and is increasingly difficult to ensure. Not every problem has succumbed to technological innovation. Reliance on technological solutions sometimes creates new problems and vulnerabilities. In some cases, societies seem to be barely ahead of the next threat to ecosystem services. In such cases new problems often seem to emerge from the last solution, and the costs of managing the environment are continually rising. Environmental breakdowns that affect large numbers of people become more common. Sometimes new problems seem to emerge faster than solutions. The challenge for the future is to learn how to organize socio-ecological systems so that ecosystem services are maintained without taxing society’s ability to implement solutions to novel, emergent problems. Projected Changes in Indirect and Direct Drivers under MA Scenarios In the four MA scenarios, during the first half of the twenty-first century the array of both indirect and direct drivers affecting ecosystems and their services is projected to remain largely the same as over the last half-century, but the relative importance of different drivers will begin to change. Some factors (such as global population growth) will begin to decline in importance and others (distribution of people, climate change, and changes to nutrient cycles) will gain more importance. (See Tables 5.1, 5.2, and 5.3.) Statements of certainty associated with findings related to the MA scenarios are conditional statements; they refer to the level of certainty or uncertainty in the particular projection should that scenario and its associated changes in drivers unfold. They do not indicate the likelihood that any particular scenario and its associated projection will come to pass. With that caveat in mind, the four MA scenarios describe these changes between 2000 and 2050 (or in some cases 2100): - Population is projected to grow to 8.1–9.6 billion in 2050 (medium to high certainty) and to 6.8–10.5 billion in 2100, depending on the scenario (S7.2.1). (See Figure 5.1.) The rate of global population growth has already peaked, at 2.1% per year in the late 1960s, and had fallen to 1.35% per year in 2000, when global population reached 6 billion (S7.ES). Population growth over the next several decades is expected to be concentrated in the poorest, urban communities in sub-Saharan Africa, South Asia, and the Middle East (S7.ES). - Per capita income is projected to increase two- to fourfold, depending on the scenario (low to medium certainty) (S7.2.2). Gross world product is projected to increase roughly three to sixfold in the different scenarios. Increasing income leads to increasing per capita consumption in most parts of the world for most resources, and it changes the structure of consumption. For example, diets tend to become higher in animal protein as income rises. - Land use change (primarily the continuing expansion of agriculture) is projected to continue to be a major direct driver of change in terrestrial and freshwater ecosystems (medium to high certainty) (S9.ES). At the global level and across all scenarios, land use change is projected to remain the dominant driver of biodiversity change in terrestrial ecosystems, consistent with the pattern over the past 50 years, followed by changes in climate and nitrogen deposition (S10.ES). 
However, other direct drivers may be more important than land use change in particular biomes. For example, climate change is likely to be the dominant driver of biodiversity change in tundra and deserts. Species invasions and water extraction are important drivers for freshwater ecosystems. - Nutrient loading is projected to become an increasingly severe problem, particularly in developing countries. Nutrient loading already has major adverse effects on freshwater ecosystems and coastal regions in both industrial and developing countries. These impacts include toxic algae blooms, other human health problems, fish kills, and damage to habitats such as coral reefs. Three out of the four MA scenarios project that the global flux of nitrogen to coastal ecosystems will increase by 10–20% by 2030 (medium certainty). (See Figure 5.2.) River nitrogen will not change in most industrial countries, while a 20–30% increase is projected for developing countries, particularly in Asia. - Climate change and its impacts (such as sea level rise) are projected to have an increasing effect on biodiversity and ecosystem services (medium certainty) (S9.ES). Under the four MA scenarios, global temperature is expected to increase significantly—1.5–2.0° Celsius above the preindustrial level in 2050 and 2.0–3.5° Celsius above it in 2100, depending on the scenario and using a median estimate for climate sensitivity (2.5° Celsius for a doubling of the CO2 concentration) (medium certainty). For the scenarios used in its Third Assessment Report, the IPCC reported a temperature increase of 2.0–6.4° Celsius compared with preindustrial levels, with about half of this range attributable to the differences in scenarios and the other half to differences in climate models. The somewhat smaller and lower range of the MA scenarios is thus partly a result of using only one climate model (and one estimate of climate sensitivity) but also the result of including climate policy responses in some scenarios as well as differences in assumptions for economic and population growth. The scenarios project an increase in global average precipitation (medium certainty), but some areas will become more arid while others will become more moist. Climate change will directly alter ecosystem services, for example, by causing changes in the productivity and growing zones of cultivated and non-cultivated vegetation. It is also projected to change the frequency of extreme events, with associated risks to ecosystem services. Finally, it is projected to indirectly affect ecosystem services in many ways, such as by causing sea level to rise, which threatens mangroves and other vegetation that now protect shorelines. Climate change is projected to further adversely affect key development challenges, including providing clean water, energy services, and food; maintaining a healthy environment; and conserving ecological systems, their biodiversity, and their associated ecological goods and services (R13.1.3). - Climate change is projected to exacerbate the loss of biodiversity and increase the risk of extinction for many species, especially those already at risk due to factors such as low population numbers, restricted or patchy habitats, and limited climatic ranges (medium to high certainty). - Water availability and quality are projected to decrease in many arid and semiarid regions (high certainty). - The risk of floods and droughts is projected to increase (high certainty). - Sea level is projected to rise by 8–88 centimeters. 
- The reliability of hydropower and biomass production is projected to decrease in some regions (high certainty). - The incidence of vector-borne diseases such as malaria and dengue and of waterborne diseases such as cholera is projected to increase in many regions (medium to high certainty), and so too are heat stress mortality and threats of decreased nutrition in other regions, along with traumatic injury and death from severe weather (high certainty). - Agricultural productivity is projected to decrease in the tropics and sub-tropics for almost any amount of warming (low to medium certainty), and adverse effects on fisheries are also projected. - Projected changes in climate during the twenty-first century are very likely to be without precedent during at least the past 10,000 years and, combined with land use change and the spread of exotic or alien species, are likely to limit both the capability of species to migrate and the ability of species to persist in fragmented habitats. Changes in Ecosystems Rapid conversion of ecosystems is projected to continue under all MA scenarios in the first half of the twenty-first century. Roughly 10–20% (low to medium certainty) of current grassland and forestland is projected to be converted to other uses between now and 2050, mainly due to the expansion of agriculture and, secondarily, because of the expansion of cities and infrastructure (S9.ES). The biomes projected to lose habitat and local species at the fastest rate in the next 50 years are warm mixed forests, savannas, scrub, tropical forests, and tropical woodlands (S10.ES). Rates of conversion of ecosystems are highly dependent on future development scenarios and in particular on changes in population, wealth, trade, and technology. Habitat loss in terrestrial environments is projected to accelerate the decline in local diversity of native species in all four scenarios by 2050 (high certainty) (S.SDM). Loss of habitat results in the immediate extirpation of local populations and the loss of the services that these populations provided. The habitat losses projected in the MA scenarios will lead to global extinctions as numbers of species approach equilibrium with the remnant habitat (high certainty) (S.SDM, S10.ES). The equilibrium number of plant species is projected to be reduced by roughly 10–15% as a result of habitat loss from 1970 to 2050 in the MA scenarios (low certainty). Other terrestrial taxonomic groups are likely to be affected to a similar extent. The pattern of extinction through time cannot be estimated with any precision, because some species will be lost immediately when their habitat is modified but others may persist for decades or centuries. Time lags between habitat reduction and extinction provide an opportunity for humans to deploy restoration practices that may rescue those species that otherwise may be in a trajectory toward extinction. Significant declines in freshwater fish species diversity are also projected due to the combined effects of climate change, water withdrawals, eutrophication, acidification, and increased invasions by non-indigenous species (low certainty). Rivers that are expected to lose fish species are concentrated in poor tropical and sub-tropical countries. Changes in Ecosystem Services and Human Well-being In three of the four MA scenarios, ecosystem services show net improvements in at least one of the three categories of provisioning, regulating, and cultural services (S.SDM). 
These three categories of ecosystem services are all in worse condition in 2050 than they are today in only one MA scenario—Order from Strength. (See Figure 5.3.) However, even in scenarios showing improvement in one or more categories of ecosystem services, biodiversity loss continues at high rates. The following changes to ecosystem services and human well-being were common to all four MA scenarios and thus may be likely under a wide range of plausible futures (S.SDM): - Human use of ecosystem services increases substantially under all MA scenarios during the next 50 years. In many cases this is accompanied by degradation in the quality of the service and sometimes, in cases where the service is being used unsustainably, a reduction in the quantity of the service available. (See Appendix A.) The combination of growing populations and growing per capita consumption increases the demand for ecosystem services, including water and food. For example, demand for food crops (measured in tons) is projected to grow by 70–85% by 2050 (S9.4.1) and global water withdrawals are projected to increase by 20–85% across the MA scenarios (S9 Fig 9.35). Water withdrawals are projected to increase significantly in developing countries but to decline in OECD countries (medium certainty) (S.SDM). In some cases, this growth in demand will be met by unsustainable uses of the services, such as through continued depletion of marine fisheries. Demand is dampened somewhat by increasing efficiency in use of resources. The quantity and quality of ecosystem services will change dramatically in the next 50 years as productivity of some services is increased to meet demand, as humans use a greater fraction of some services, and as some services are diminished or degraded. Ecosystem services that are projected to be further impaired by ecosystem change include fisheries, food production in drylands, quality of fresh waters, and cultural services. - Food security is likely to remain out of reach for many people. Child malnutrition will be difficult to eradicate even by 2050 (low to medium certainty) and is projected to increase in some regions in some MA scenarios, despite increasing food supply under all four scenarios (medium to high certainty) and more diversified diets in poor countries (low to medium certainty) (S.SDM). Three of the MA scenarios project reductions in child undernourishment by 2050 of between 10% and 60%, but undernourishment increases by 10% in Order from Strength (low certainty) (S9.4.1). (See Figure 5.4.) This is due to a combination of factors related to food supply systems (inadequate investments in food production and its supporting infrastructure resulting in low productivity increases, varying trade regimes) and food demand and accessibility (continuing poverty in combination with high population growth rates, lack of food infrastructure investments). - Vast, complex changes with great geographic variability are projected to occur in world freshwater resources and hence in their provisioning of ecosystem services in all scenarios (S.SDM). Climate change will lead to increased precipitation over more than half of Earth’s surface, and this will make more water available to society and ecosystems (medium certainty). However, increased precipitation is also likely to increase the frequency of flooding in many areas (high certainty). 
Increases in precipitation will not be universal, and climate change will also cause a substantial decrease in precipitation in some areas, with an accompanying decrease in water availability (medium certainty). These areas could include highly populated arid regions such as the Middle East and Southern Europe (low to medium certainty). While water withdrawals decrease in most industrial countries, they are expected to increase substantially in Africa and some other developing regions, along with wastewater discharges, overshadowing the possible benefits of increased water availability (medium certainty). - A deterioration of the services provided by freshwater resources (such as aquatic habitat, fish production, and water supply for households, industry, and agriculture) is expected in developing countries under the scenarios that are reactive to environmental problems (S9.ES). Less severe but still important declines are expected in the scenarios that are more proactive about environmental problems (medium certainty). - Growing demand for fish and fish products leads to an increasing risk of a major and long-lasting collapse of regional marine fisheries (low to medium certainty) (S.SDM). Aquaculture may relieve some of this pressure by providing for an increasing fraction of fish demand. However, this would require aquaculture to reduce its current reliance on marine fish as a feed source. The future contribution of terrestrial ecosystems to the regulation of climate is uncertain (S9.ES). Carbon release or uptake by ecosystems affects the CO2 and CH4 content of the atmosphere at the global scale and thereby affects global climate. Currently, the biosphere is a net sink of carbon, absorbing about 1–2 gigatons a year, or approximately 20% of fossil fuel emissions. It is very likely that the future of this service will be greatly affected by expected land use change. In addition, a higher atmospheric CO2 concentration is expected to enhance net productivity, but this does not necessarily lead to an increase in the carbon sink. The limited understanding of soil respiration processes generates uncertainty about the future of the carbon sink. There is medium certainty that climate change will increase terrestrial fluxes of CO2 and CH4 in some regions (such as in Arctic tundra). Dryland ecosystems are particularly vulnerable to changes over the next 50 years. The combination of low current levels of human well-being (high rates of poverty, low per capita GDP, high infant mortality rates), a large and growing population, high variability of environmental conditions in dryland regions, and high sensitivity of people to changes in ecosystem services means that continuing land degradation could have profoundly negative impacts on the well-being of a large number of people in these regions (S.SDM). Subsidies of food and water to people in vulnerable drylands can have the unintended effect of increasing the risk of even larger breakdowns of ecosystem services in future years. Local adaptation and conservation practices can mitigate some losses of dryland ecosystem services, although it will be difficult to reverse trends toward loss of food production capacity, water supplies, and biodiversity in drylands. While human health improves under most MA scenarios, under one plausible future health and social conditions in the North and South could diverge (S11). 
In the more promising scenarios related to health, the number of undernourished children would be reduced, the burden of epidemic diseases such as HIV/AIDS, malaria, and tuberculosis would be lowered, improved vaccine development and distribution could allow populations to cope comparatively well with the next influenza pandemic, and the impact of other new diseases such as SARS would also be limited by well-coordinated public health measures. Under the Order from Strength scenario, however, it is plausible that the health and social conditions for the North and South could diverge as inequality increases and as commerce and scientific exchanges between industrial and developing countries decrease. In this case, health in developing countries could become worse, causing a negative spiral of poverty, declining health, and degraded ecosystems. The increased population in the South, combined with static or deteriorating nutrition, could force increased contact between humans and nonagricultural ecosystems, especially to obtain bushmeat and other forest goods. This could lead to more outbreaks of hemorrhagic fever and zoonoses. It is possible, though with low probability, that a more chronic disease could cross from a non-domesticated animal species into humans, at first slowly but then more rapidly colonizing human populations. Each scenario yields a different package of gains, losses, and vulnerabilities to components of human well-being in different regions and populations (S.SDM). Actions that focus on improving the lives of the poor by reducing barriers to international flows of goods, services, and capital tend to lead to the most improvement in health and social relations for the currently most disadvantaged people. But human vulnerability to ecological surprises is high. Globally integrated approaches that focus on technology and property rights for ecosystem services generally improve human well-being in terms of health, security, social relations, and material needs. If the same technologies are used globally, however, local culture can be lost or undervalued. High levels of trade lead to more rapid spread of emergent diseases, somewhat reducing the gains in health in all areas. Locally focused, learning-based approaches lead to the largest improvements in social relations. Order from Strength, which focuses on reactive policies in a regionalized world, has the least favorable outcomes for human well-being, as the global distribution of ecosystem services and human resources that underpin human well-being is increasingly skewed. (See Figure 5.5.) Wealthy populations generally meet most material needs but experience psychological unease. Anxiety, depression, obesity, and diabetes have a greater impact on otherwise privileged populations in this scenario. Disease creates a heavy burden for disadvantaged populations. Proactive or anticipatory management of ecosystems is generally advantageous in the MA scenarios, but it is particularly beneficial under changing or novel conditions (S.SDM). (See Table 5.4.) Ecological surprises are inevitable because of the complexity of the interactions and because of limitations in current understanding of the dynamic properties of ecosystems. 
Currently well understood phenomena that were surprises of the past century include the ability of pests to evolve resistance to biocides, the contribution to desertification of certain types of land use, biomagnification of toxins, and the increase in vulnerability of ecosystems to eutrophication and unwanted species due to the removal of predators. While we do not know which surprises lie ahead in the next 50 years, we can be certain that there will be some. In general, proactive action to manage systems sustainably and to build resilience into systems will be advantageous, particularly when conditions are changing rapidly, when surprise events are likely, or when uncertainty is high. This approach is beneficial largely because the restoration of ecosystems or ecosystem services following their degradation or collapse is generally more costly and time-consuming than preventing degradation, if that is possible at all. Nevertheless, there are costs and benefits to both proactive and reactive approaches, as Table 5.4 indicates. What can be learned about the consequences of ecosystem change for human well-being at sub-global scales? The MA included a sub-global assessment component to assess differences in the importance of ecosystem services for human well-being around the world (SG.SDM). The Sub-global Working Group comprised 33 assessments. (See Figure 6.1.) These were designed to consider the importance of ecosystem services for human well-being at local, national, and regional scales. The areas covered in these assessments range from small villages in India and cities like Stockholm and São Paulo to whole countries like Portugal and large regions like southern Africa. In a few cases, the sub-global assessments were designed to cover multiple nested scales. For example, the Southern Africa study included assessments of the entire region of Africa south of the equator, of the Gariep and Zambezi river basins in that region, and of local communities within those basins. This nested design was included as part of the overall design of the MA to analyze the influence of scale on ecosystem services and human well-being and to study cross-scale interactions. Most assessments, however, were conducted with a focus on the needs of users at a single spatial scale—a particular community, watershed, or region. The scale at which an assessment is undertaken significantly influences the problem definition and the assessment results (SG.SDM). Findings of assessments done at different scales varied due to the specific questions posed or the information analyzed. Local communities are influenced by global, regional, and local factors. Global factors include commodity prices (global trade asymmetries that influence local production patterns, for instance) and global climate change (such as sea level rise). Regional factors include water supply regimes (safe piped water in rural areas), regional climate (desertification), and geomorphological processes (soil erosion and degradation). Local factors include market access (distance to market), disease prevalence (malaria, for example), or localized climate variability (patchy thunderstorms). Assessments conducted at different scales tended to focus on drivers and impacts most relevant at each scale, yielding different but complementary findings. This provides some of the benefit of a multi-scale assessment process, since each component assessment provides a different perspective on the issues addressed. 
Although there is overall congruence in the results from global and sub-global assessments for services like water and biodiversity, there are examples where local assessments showed the condition was either better or worse than expected from the global assessment (SG.SDM). For example, the condition of water resources was significantly worse than expected in places like São Paulo and the Laguna Lake Basin in the Philippines. There were more mismatches for biodiversity than for water provisioning because the concepts and measures of biodiversity were more diverse in the sub-global assessments. Drivers of change act in very distinct ways in different regions (SG7.ES). Though similar drivers might be present in various assessments, their interactions—and thus the processes leading to ecosystem change—differed significantly from one assessment to another. For example, although the Amazon, Central Africa, and Southeast Asia in the Tropical Forest Margins assessment have the same set of individual drivers of land use change (deforestation, road construction, and pasture creation), the interactions among these drivers leading to change differ. Deforestation driven by swidden agriculture is more widespread in upland and foothill zones of Southeast Asia than in other regions. Road construction by the state followed by colonizing migrant settlers, who in turn practice slash-and-burn agriculture, is most frequent in lowland areas of Latin America, especially in the Amazon Basin. Pasture creation for cattle ranching is causing deforestation almost exclusively in the humid lowland regions of mainland South America. The spontaneous expansion of small-holder agriculture and fuelwood extraction for domestic uses are important causes of deforestation in Africa. The assessments identified inequities in the distribution of the costs and benefits of ecosystem change, which are often displaced to other places or future generations (SG.SDM). For example, the increase in urbanization in countries like Portugal is generating pressures on ecosystems and services in rural areas. The increase in international trade is also generating additional pressures around the world, illustrated by the cases of the mining industries in Chile and Papua New Guinea. In some situations, the costs of transforming ecosystems are simply deferred to future generations. An example reported widely across sub-global assessments in different parts of the world is tropical deforestation, which caters to current needs but leads to a reduced capacity to supply services in the future. Declining ecosystem trends have sometimes been mitigated by innovative local responses. The “threats” observed at an aggregated, global level may be both overestimated and underestimated from a sub-global perspective (SG.SDM). Assessments at an aggregated level often fail to take into account the adaptive capacity of sub-global actors. Through collaboration in social networks, actors can develop new institutions and reorganize to mitigate declining conditions. On the other hand, sub-global actors tend to neglect drivers that are beyond their reach of immediate influence when they craft responses. Hence, it is crucial for decision-makers to develop institutions at the global, regional, and national levels that strengthen the adaptive capacity of actors at the sub-national and local levels to craft context-specific responses that address the full range of relevant drivers. 
The Biodiversity Management Committees in India are a good example of a national institution that enables local actors to respond to biodiversity loss. This means neither centralization nor decentralization but institutions at multiple levels that enhance the adaptive capacity and effectiveness of sub-national and local responses. Multi-scale assessments offer insights and results that would otherwise be missed (SG.SDM). The variability among sub-global assessments in problem definition, objectives, scale criteria, and systems of explanation increased at finer scales of assessment (for example, social equity issues became more visible from coarser to finer scales of assessment). The role of biodiversity as a risk avoidance mechanism for local communities is frequently hidden until local assessments are conducted (as in the Indian local, Sinai, and Southern African livelihoods studies). Failure to acknowledge that stakeholders at different scales perceive different values in various ecosystem services can lead to unworkable and inequitable policies or programs at all scales (SGWG). Ecosystem services that are of considerable importance at global scales, such as carbon sequestration or waste regulation, are not necessarily seen to be of value locally. Similarly, services of local importance, such as the cultural benefits of ecosystems, the availability of manure for fuel and fertilizer, or the presence of non-wood forest products, are often not seen as important globally. Responses designed to achieve goals related to global or regional concerns are likely to fail unless they take into account the different values and concerns motivating local communities. There is evidence that including multiple knowledge systems increases the relevance, credibility, and legitimacy of the assessment results for some users (SG.SDM). For example, in Bajo Chirripó in Costa Rica, the involvement of non-scientists added legitimacy and relevance to assessment results for a number of potential users at the local level. In many of the sub-global assessments, however, local resource users were one among many groups of decision-makers, so the question of legitimacy needs to be taken together with that of empowerment. Integrated assessments of ecosystems and human well-being need to be adapted to the specific needs and characteristics of the groups undertaking the assessment (SG.SDM, SG11.ES). Assessments are most useful to decision-makers if they respond to the needs of those individuals. As a result, the MA sub-global assessments differed significantly in the issues they addressed. At the same time, given the diversity of assessments involved in the MA, the basic approach had to be adapted by different assessments to ensure its relevance to different user groups. (See Box 6.1.) Several community-based assessments adapted the MA framework to allow for more dynamic interplays between variables, to capture fine-grained patterns and processes in complex systems, and to leave room for a more spiritual world-view. In Peru and Costa Rica, for example, other conceptual frameworks were used that incorporated both the MA principles and local cosmologies. In southern Africa, various frameworks were used in parallel to offset the shortcomings of the MA framework for community assessments. These modifications and adaptations of the framework are an important outcome of the MA. Box 6.1. Local Adaptations of MA Conceptual Framework (SG.SDM) The MA framework was applied in a wide range of assessments at multiple scales. 
Particularly for the more local assessments, the framework needed to be adapted to better reflect the needs and concerns of local communities. In the case of an assessment conducted by and for indigenous communities in the Vilcanota region of Peru, the framework had to be re-created starting from the Quechua understanding of ecological and social relationships. (See Figure.) Within the Quechua vision of the cosmos, concepts such as reciprocity (Ayni), the inseparability of space and time, and the cyclical nature of all processes (Pachakuti) are important components of the Inca definition of ecosystems. Love (Munay) and working (Llankay) bring humans to a higher state of knowledge (Yachay) about their surroundings and are therefore key concepts linking Quechua communities to the natural world. Ayllu represents the governing institutions that regulate interactions between all living beings. The resulting framework has similarities with the MA Conceptual Framework, but the divergent features are considered to be important to the Quechua people conducting the assessment.

The Vilcanota conceptual framework also includes multiple scales (Kaypacha, Hananpacha, Ukupacha); however, these represent both spatial scales and the cyclical relationship between the past, present, and future. Inherent in this concept of space and time is the adaptive capacity of the Quechua people, who welcome change and have become resilient to it through an adaptive learning process. (It is recognized that current rates of change may prove challenging to the adaptive capacities of the communities.) The cross shape of the Vilcanota framework diagram represents the “Chakana,” the most recognized and sacred shape to the Quechua people, which orders the world through deliberative and collective decision-making that emphasizes reciprocity (Ayni). Pachamama is similar to a combination of the “ecosystem goods and services” and “human well-being” components of the MA framework. Pachakuti is similar to the MA “drivers” (both direct and indirect). Ayllu (and Munay, Yachay, and Llankay) may be seen as responses and are more organically integrated into the cyclic process of change and adaptation.

In the Vilcanota assessment, the Quechua communities directed their work process to assess the conditions and trends of certain aspects of the Pachamama (focusing on water, soil, and agrobiodiversity), how these goods and services are changing, the reasons behind the changes, the effects on the other elements of the Pachamama, how the communities have adapted and are adapting to the changes, and the state of resilience of the Quechua principles and institutions for dealing with these changes in the future. Developing the local conceptual framework from a base of local concepts and principles, as opposed to simply translating the MA framework into local terms, has allowed local communities to take ownership of their assessment process and given them the power both to assess the local environment and human populations using their own knowledge and principles of well-being and to seek responses to problems within their own cultural and spiritual institutions.

What is known about time scales, inertia, and the risk of nonlinear changes in ecosystems?

The time scale of change refers to the time required for the effects of a perturbation of a process to be expressed. Time scales relevant to ecosystems and their services are shown in Figure 7.1.
Inertia refers to the delay or slowness in the response of a system to factors altering its rate of change, including continuation of change in the system after the cause of that change has been removed. Resilience refers to the amount of disturbance or stress that a system can absorb and still remain capable of returning to its pre-disturbance state.

Time Scales and Inertia

Many impacts of humans on ecosystems (both harmful and beneficial) are slow to become apparent; this can result in the costs associated with ecosystem changes being deferred to future generations. For example, excessive phosphorus is accumulating in many agricultural soils, threatening rivers, lakes, and coastal oceans with increased eutrophication. Yet it may take years or decades for the full impact of the phosphorus to become apparent through erosion and other processes (S7.3.2). Similarly, the use of groundwater supplies can exceed the recharge rate for some time before costs of extraction begin to grow significantly. In general, people manage ecosystems in a manner that increases short-term benefits; they may not be aware of, or may ignore, costs that are not readily and immediately apparent. This has the inequitable result of increasing current benefits at costs to future generations.

Different categories of ecosystem services tend to change over different time scales, making it difficult for managers to evaluate trade-offs fully. For example, supporting services such as soil formation and primary production and regulating services such as water and disease regulation tend to change over much longer time scales than provisioning services. As a consequence, impacts on more slowly changing supporting and regulating services are often overlooked by managers in pursuit of increased use of provisioning services (S12.ES).

The inertia of various direct and indirect drivers differs considerably, and this strongly influences the time frame for solving ecosystem-related problems once they are identified (RWG, S7). For some drivers, such as the over-harvest of particular species, lag times are rather short, and the impact of the driver can be minimized or halted within short time frames. For others, such as nutrient loading and, especially, climate change, lag times are much longer, and the impact of the driver cannot be lessened for years or decades.

Significant inertia exists in the process of species extinctions that result from habitat loss; even if habitat loss were to end today, it would take hundreds of years for species numbers to reach a new and lower equilibrium due to the habitat changes that have taken place in the last centuries (S10). Most species that will go extinct in the next several centuries will be driven to extinction as a result of loss or degradation of their habitat (either through land cover changes or increasingly through climate changes). Habitat loss can lead to rapid extinction of some species (such as those with extremely limited ranges); but for many species, extinction will only occur after many generations, and long-lived species such as some trees could persist for centuries before ultimately going extinct. This “extinction debt” has important implications. First, while reductions in the rate of habitat loss will protect certain species and have significant long-term benefits for species survival in the aggregate, the impact on rates of extinction over the next 10–50 years is likely to be small (medium certainty).
Second, until a species does go extinct, opportunities exist for it to be recovered to a viable population size.

Nonlinear Changes in Ecosystems

Nonlinear changes, including accelerating, abrupt, and potentially irreversible changes, have been commonly encountered in ecosystems and their services. Most of the time, change in ecosystems and their services is gradual and incremental. Most of these gradual changes are detectable and predictable, at least in principle (high certainty) (S.SDM). However, many examples exist of nonlinear and sometimes abrupt changes in ecosystems. In these cases, the ecosystem may change gradually until a particular pressure on it reaches a threshold, at which point changes occur relatively rapidly as the system shifts to a new state. Some of these nonlinear changes can be very large in magnitude and have substantial impacts on human well-being. Capabilities for predicting some nonlinear changes are improving, but for most ecosystems and for most potential nonlinear changes, while science can often warn of increased risks of change, it cannot predict the thresholds where the change will be encountered (C6.2, S13.4). Numerous examples exist of nonlinear and relatively abrupt changes in ecosystems:

- Disease emergence (S13.4): Infectious diseases regularly exhibit nonlinear behavior. If, on average, each infected person infects at least one other person, then an epidemic spreads, while if, on average, the infection is transmitted to fewer than one other person, the epidemic dies out (a simple illustrative sketch of this threshold appears later in this section). High human population densities in close contact with animal reservoirs of infectious disease facilitate rapid exchange of pathogens, and if the threshold rate of infection is achieved—that is, if each infected person on average transmits the infection to at least one other person—the resulting infectious agents can spread quickly through a contiguous, highly mobile worldwide human population with few barriers to transmission. The almost instantaneous outbreak of SARS in different parts of the world is an example of such potential, although rapid and effective action contained its spread. During the 1997/98 El Niño, excessive flooding caused cholera epidemics in Djibouti, Somalia, Kenya, Tanzania, and Mozambique. Warming of the African Great Lakes due to climate change may create conditions that increase the risk of cholera transmission in surrounding countries (C14.2.1). An event similar to the 1918 Spanish flu pandemic, which is thought to have killed 20–40 million people worldwide, could now result in over 100 million deaths within a single year. Such a catastrophic event, the possibility of which is being seriously considered by the epidemiological community, would probably lead to severe economic disruption and possibly even rapid collapse in a world economy dependent on fast global exchange of goods and services.

- Algal blooms and fish kills (S13.4): Excessive nutrient loading fertilizes freshwater and coastal ecosystems. While small increases in nutrient loading often cause little change in many ecosystems, once a threshold of nutrient loading is achieved, the changes can be abrupt and extensive, creating harmful algal blooms (including blooms of toxic species) and often leading to the domination of the ecosystem by one or a few species. Severe nutrient overloading can lead to the formation of oxygen-depleted zones, killing all animal life.

- Fisheries collapses (C18): Fish population collapses have been commonly encountered in both freshwater and marine fisheries.
Fish populations are generally able to withstand some level of catch with a relatively small impact on their overall population size. As the catch increases, however, a threshold is reached after which too few adults remain to produce enough offspring to support that level of harvest, and the population may drop abruptly to a much smaller size. For example, the Atlantic cod stocks of the east coast of Newfoundland collapsed in 1992, forcing the closure of the fishery after hundreds of years of exploitation, as shown in Figure 3.4 (CF2 Box 2.4). Most important, the stocks may take years to recover, or may not recover at all, even if harvesting is significantly reduced or eliminated entirely.

- Species introductions and losses: Introductions (or removal) of species can cause nonlinear changes in ecosystems and their services. For example, the introduction of the zebra mussel into U.S. aquatic systems resulted in the extirpation of native clams in Lake St. Clair, large changes in energy flow and ecosystem function, and annual costs of $100 million to the power industry and other users (S12.4.8). The introduction of the comb jellyfish (Mnemiopsis leidyi) in the Black Sea caused the loss of 26 major fisheries species and has been implicated (along with other factors) in subsequent growth of the anoxic “dead zone” (C28.5). The loss of sea otters from many coastal ecosystems on the Pacific Coast of North America due to hunting led to booming populations of sea urchins (a prey species for otters), which in turn led to the loss of kelp forests (which are eaten by urchins).

- Changes in dominant species in coral ecosystems: Some coral reef ecosystems have undergone sudden shifts from coral-dominated to algae-dominated reefs. The trigger for such phase shifts, which are essentially irreversible, is usually multifaceted and includes increased nutrient input leading to eutrophic conditions, and removal of herbivorous fishes that maintain the balance between corals and algae. Once a threshold is reached, the change in the ecosystem takes place within months and the resulting ecosystem, although stable, is less productive and less diverse. One well-studied example is the sudden switch in 1983 from coral to algal domination of Jamaican reef systems. This followed several centuries of overfishing of herbivores, which left the control of algal cover almost entirely dependent on a single species of sea urchin, whose populations collapsed when exposed to a species-specific pathogen. As a result, Jamaica’s reefs shifted (apparently irreversibly) to a new low-diversity, algae-dominated state with very limited capacity to support fisheries (C4.6).

- Regional climate change (C13.3): The vegetation in a region influences climate through albedo (reflectance of radiation from the surface), transpiration (flux of water from the ground to the atmosphere through plants), and the aerodynamic properties of the surface. In the Sahel region of Africa, vegetation cover is almost completely controlled by rainfall. When vegetation is present, rainfall is quickly recycled, generally increasing precipitation and, in turn, leading to a denser vegetation canopy. Model results suggest that land degradation leads to a substantial reduction in water recycling and may have contributed to the observed trend in rainfall reduction in the region over the last 30 years. In tropical regions, deforestation generally leads to decreased rainfall.
Since forest existence crucially depends on rainfall, the relationship between tropical forests and precipitation forms a positive feedback that, under certain conditions, theoretically leads to the existence of two steady states: rainforest and savanna (although some models suggest only one stable climate-vegetation state in the Amazon).

There is established but incomplete evidence that changes being made in ecosystems are increasing the likelihood of nonlinear and potentially high-impact, abrupt changes in physical and biological systems that have important consequences for human well-being (C6, S3, S13.4, S.SDM). The increased likelihood of these events stems from the following factors:

- On balance, changes humans are making to ecosystems are reducing the resilience of the ecological components of the systems (established but incomplete) (C6, S3, S12). Genetic and species diversity, as well as spatial patterns of landscapes, environmental fluctuations, and temporal cycles with which species evolved, generate the resilience of ecosystems. Species within a functional group contribute to ecosystem processes and services in similar ways. Diversity among functional groups increases the flux of ecosystem processes and services (established but incomplete). Within functional groups, species respond differently to environmental fluctuations. This response diversity derives from variation in the response of species to environmental drivers, heterogeneity in species distributions, differences in ways that species use seasonal cycles or disturbance patterns, or other mechanisms. Response diversity enables ecosystems to adjust in changing environments, altering biotic structure in ways that maintain processes and services (high certainty) (S.SDM). The loss of biodiversity that is now taking place thus tends to reduce the resilience of ecosystems.

- There are growing pressures from various drivers (S7, SG7.5). Threshold changes in ecosystems are not uncommon, but they are infrequently encountered in the absence of human-caused pressures on ecosystems. Many of these pressures are now growing. Increased fish harvests raise the likelihood of fisheries collapses; higher rates of climate change boost the potential for species extinctions; increased introductions of nitrogen and phosphorus into the environment make the eutrophication of aquatic ecosystems more likely; as human populations become more mobile, more and more species are being introduced into new habitats, and this increases the chance of harmful pests emerging in those regions.

The growing bushmeat trade poses particularly significant threats associated with nonlinear changes, in this case accelerating rates of change (C8.3, S.SDM, C14). Growth in the use and trade of bushmeat is placing increasing pressure on many species, particularly in Africa and Asia. While the population size of a harvested species may decline gradually with increasing harvest for some time, once the harvest exceeds sustainable levels, the rate of decline of populations of the harvested species will tend to accelerate. This could place them at risk of extinction and also reduce the food supply of the people dependent on these resources. Finally, the bushmeat trade involves relatively high levels of interaction between humans and some closely related wild animals that are eaten. Again, this increases the risk of a nonlinear change, in this case the emergence of new and serious pathogens.
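The threshold noted in the disease emergence example above can be illustrated with a small, self-contained simulation. The sketch below uses a simple branching-process model of transmission; the model choice, function names, and parameter values are hypothetical teaching devices, not anything drawn from the MA. It shows that when each case causes on average fewer than one new infection, outbreaks die out quickly, while just above that threshold a substantial share of outbreaks become very large.

```python
# Illustrative branching-process sketch of the epidemic threshold discussed
# above. All names and parameter values are hypothetical, not MA estimates.
import math
import random

def sample_poisson(lam):
    # Simple Poisson sampler (Knuth's method); avoids external dependencies.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= limit:
            return k - 1

def outbreak_size(avg_secondary_cases, cap=100_000):
    """Total cases in one outbreak where each case infects Poisson(avg) others."""
    total, active = 1, 1
    while active and total < cap:
        new_cases = sum(sample_poisson(avg_secondary_cases) for _ in range(active))
        total += new_cases
        active = new_cases
    return total

if __name__ == "__main__":
    random.seed(1)
    for r in (0.8, 1.0, 1.2, 1.5):
        sizes = sorted(outbreak_size(r) for _ in range(200))
        large = sum(s >= 1_000 for s in sizes) / len(sizes)
        print(f"avg secondary infections {r}: median outbreak {sizes[100]:>6}, "
              f"fraction of large outbreaks {large:.2f}")
```

The abrupt change in behavior near an average of one secondary infection per case mirrors the threshold language used throughout this section, although the numbers themselves carry no empirical weight.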
Given the speed and magnitude of international travel today, such new pathogens could spread rapidly around the world.

A potential nonlinear response, currently the subject of intensive scientific research, is the atmospheric capacity to cleanse itself of air pollution (in particular, hydrocarbons and reactive nitrogen compounds) (C.SDM). This capacity depends on chemical reactions involving the hydroxyl radical, the atmospheric concentration of which has declined by about 10% (medium certainty) since preindustrial times.

Once an ecosystem has undergone a nonlinear change, recovery to the original state may take decades or centuries and may sometimes be impossible. For example, the recovery of overexploited fisheries that have been closed to fishing is quite variable. Although the cod fishery in Newfoundland has been closed for 13 years (except for a small inshore fishery between 1998 and 2003), there have been few signs of a recovery, and many scientists are not optimistic about its return in the foreseeable future (C18.2.6). On the other hand, the North Sea herring fishery collapsed due to over-harvesting in the late 1970s, but it recovered after being closed for four years (C18).

What options exist to manage ecosystems sustainably?

It is a major challenge to reverse the degradation of ecosystems while meeting increasing demands for their services. But this challenge can be met. Three of the four MA scenarios show that changes in policies, institutions, and practices can mitigate some of the negative consequences of growing pressures on ecosystems, although the changes required are large and not currently under way (S.SDM). As noted in Key Question 5, in three of the four MA scenarios at least one of the three categories of provisioning, regulating, and cultural services is in better condition in 2050 than in 2000, although biodiversity loss continues at high rates in all scenarios. The scale of interventions that results in these positive outcomes, however, is very significant. The interventions include major investments in environmentally sound technology, active adaptive management, proactive actions to address environmental problems before their full consequences are experienced, major investments in public goods (such as education and health), strong action to reduce socioeconomic disparities and eliminate poverty, and expanded capacity of people to manage ecosystems adaptively. More specifically, in Global Orchestration trade barriers are eliminated, distorting subsidies are removed, and a major emphasis is placed on eliminating poverty and hunger. In Adapting Mosaic, by 2010 most countries are spending close to 13% of their GDP on education (compared with an average of 3.5% in 2000), and institutional arrangements to promote transfer of skills and knowledge among regional groups proliferate. In TechnoGarden, policies are put in place to provide payment to individuals and companies that provide or maintain ecosystem services. For example, in this scenario, by 2015 roughly 50% of European agriculture and 10% of North American agriculture is aimed at balancing the production of food with the production of other ecosystem services. Under this scenario, significant advances occur in the development of environmental technologies to increase production of services, create substitutes, and reduce harmful trade-offs.
Past actions to slow or reverse the degradation of ecosystems have yielded significant benefits, but these improvements have generally not kept pace with growing pressures and demands. Although most ecosystem services assessed in the MA are being degraded, the extent of that degradation would have been much greater without responses implemented in past decades. For example, more than 100,000 protected areas (including strictly protected areas such as national parks as well as areas managed for the sustainable use of natural ecosystems, including timber harvest or wildlife harvest) covering about 11.7% of the terrestrial surface have now been established (R5.2.1). These play an important role in the conservation of biodiversity and ecosystem services, although important gaps in the distribution of protected areas remain, particularly in marine and freshwater systems. Technological advances have also helped lessen the growth in pressure on ecosystems per unit increase in demand for ecosystem services. For all developing countries, for instance, yields of wheat, rice, and maize rose between 109% and 208% in the past 40 years. Without this increase, far more habitat would have been converted to agriculture during this time.

An effective set of responses to ensure the sustainable management of ecosystems must address the drivers presented in Key Question 4 and overcome barriers related to (RWG):

- inappropriate institutional and governance arrangements, including the presence of corruption and weak systems of regulation and accountability;
- market failures and the misalignment of economic incentives;
- social and behavioral factors, including the lack of political and economic power of some groups (such as poor people, women, and indigenous groups) who are particularly dependent on ecosystem services or harmed by their degradation;
- underinvestment in the development and diffusion of technologies that could increase the efficiency of use of ecosystem services and reduce the harmful impacts of various drivers of ecosystem change; and
- insufficient knowledge (as well as the poor use of existing knowledge) concerning ecosystem services and management, policy, technological, behavioral, and institutional responses that could enhance benefits from these services while conserving resources.

All these barriers are compounded by weak human and institutional capacity related to the assessment and management of ecosystem services, underinvestment in the regulation and management of their use, lack of public awareness, and lack of awareness among decision-makers of the threats posed by the degradation of ecosystem services and the opportunities that more sustainable management of ecosystems could provide.

The MA assessed 74 response options for ecosystem services, integrated ecosystem management, conservation and sustainable use of biodiversity, and climate change. (See Appendix B.) Many of these options hold significant promise for conserving or sustainably enhancing the supply of ecosystem services. Examples of promising responses that address the barriers just described are presented in the remainder of this section (RWG, R2). The stakeholder groups that would need to take decisions to implement each response are indicated as follows: G for government, B for business and industry, and N for nongovernmental organizations and other civil society organizations such as community-based and indigenous peoples’ organizations.
Institutions and Governance

Changes in institutional and environmental governance frameworks are sometimes required in order to create the enabling conditions for effective management of ecosystems, while in other cases existing institutions could meet these needs but face significant barriers. Many existing institutions at both the global and the national level have the mandate to address the degradation of ecosystem services but face a variety of challenges in doing so related to the need for greater cooperation across sectors and the need for coordinated responses at multiple scales. However, since a number of the issues identified in this assessment are recent concerns and were not specifically taken into account in the design of today’s institutions, changes in existing institutions and the development of new ones may sometimes be needed, particularly at the national scale. In particular, existing national and global institutions are not well designed to deal with the management of open access resources, a characteristic of many ecosystem services. Issues of ownership and access to resources, rights to participation in decision-making, and regulation of particular types of resource use or discharge of wastes can strongly influence the sustainability of ecosystem management and are fundamental determinants of who wins and who loses from changes in ecosystems. Corruption—a major obstacle to effective management of ecosystems—also stems from weak systems of regulation and accountability.

Promising interventions include:

- Integration of ecosystem management goals within other sectors and within broader development planning frameworks (G). The most important public policy decisions affecting ecosystems are often made by agencies and in policy arenas other than those charged with protecting ecosystems. Ecosystem management goals are more likely to be achieved if they are reflected in decisions in other sectors and in national development strategies. For example, the Poverty Reduction Strategies prepared by developing-country governments for the World Bank and other institutions strongly shape national development priorities, but in general these have not taken into account the importance of ecosystems to improving the basic human capabilities of the poorest (R17.ES).

- Increased coordination among multilateral environmental agreements and between environmental agreements and other international economic and social institutions (G). International agreements are indispensable for addressing ecosystem-related concerns that span national boundaries, but numerous obstacles weaken their current effectiveness (R17.2). The limited, focused nature of the goals and mechanisms included in most bilateral and multilateral environmental treaties does not address the broader issue of ecosystem services and human well-being. Steps are now being taken to increase coordination among these treaties, and this could help broaden the focus of the array of instruments. However, coordination is also needed between the multilateral environmental agreements and the more politically powerful international legal institutions, such as economic and trade agreements, to ensure that they are not acting at cross-purposes (R.SDM). And implementation of these agreements also needs to be coordinated among relevant institutions and sectors at the national level.
- Increased transparency and accountability of government and private-sector performance in decisions that affect ecosystems, including through greater involvement of concerned stakeholders in decision-making (G, B, N) (RWG, SG9). Laws, policies, institutions, and markets that have been shaped through public participation in decision-making are more likely to be effective and perceived as just. For example, degradation of freshwater and other ecosystem services generally has a disproportionate impact on those who are, in various ways, excluded from participation in the decision-making process (R7.2.3). Stakeholder participation also contributes to the decision-making process because it allows a better understanding of impacts and vulnerability, the distribution of costs and benefits associated with trade-offs, and the identification of a broader range of response options that are available in a specific context. And stakeholder involvement and transparency of decision-making can increase accountability and reduce corruption.

- Development of institutions that devolve (or centralize) decision-making to meet management needs while ensuring effective coordination across scales (G, B, N) (RWG). Problems of ecosystem management have been exacerbated by both overly centralized and overly decentralized decision-making. For example, highly centralized forest management has proved ineffective in many countries, and efforts are now being made to move responsibility to lower levels of decision-making either within the natural resources sector or as part of broader decentralization of governmental responsibilities. At the same time, one of the most intractable problems of ecosystem management has been the lack of alignment between political boundaries and units appropriate for the management of ecosystem goods and services. Downstream communities may not have access to the institutions through which upstream actions can be influenced; alternatively, downstream communities or countries may be stronger politically than upstream regions and may dominate control of upstream areas without addressing upstream needs. A number of countries, however, are now strengthening regional institutions for the management of trans-boundary ecosystems (such as the Danube River, the Mekong River Commission, East African cooperation on Lake Victoria, and the Amazon Cooperation Treaty Organization).

- Development of institutions to regulate interactions between markets and ecosystems (G) (RWG). The potential of policy and market reforms to improve ecosystem management is often constrained by weak or absent institutions. For example, the potential of the Clean Development Mechanism established under the Framework Convention on Climate Change to provide financial support to developing countries in return for greenhouse gas reductions, which would realize climate and biodiversity benefits through payments for carbon sequestration in forests, is constrained by unclear property rights, concerns over the permanence of reductions, and lack of mechanisms for resolving conflicts. Moreover, existing regulatory institutions often do not have ecosystem protection as a clear mandate. For example, independent regulators of privatized water systems and power systems do not necessarily promote resource use efficiency and renewable supply. The state thus retains an important role in setting and enforcing rules, even in the context of privatization and market-led growth.
- Development of institutional frameworks that promote a shift from highly sectoral resource management approaches to more integrated approaches (G, B) (R15.ES, R12.ES, R11.ES). In most countries, separate ministries are in charge of different aspects of ecosystems (such as ministries of environment, agriculture, water, and forests) and different drivers of change (such as ministries of energy, transportation, development, and trade). Each of these ministries has control over different aspects of ecosystem management. As a result, there is seldom the political will to develop effective ecosystem management strategies, and competition among the ministries can often result in policy choices that are detrimental to ecosystems. Integrated responses intentionally and actively address ecosystem services and human well-being simultaneously, such as integrated coastal zone management, integrated river basin management, and national sustainable development strategies. Although the potential for integrated responses is high, numerous barriers have limited their effectiveness: they are resource-intensive, but the potential benefits can exceed the costs; they require multiple instruments for their implementation; and they require new institutional and governance structures, skills, knowledge, and capacity. Thus far, the results of implementation of integrated responses have been mixed in terms of ecological, social, and economic impacts.

Economics and Incentives

Economic and financial interventions provide powerful instruments to regulate the use of ecosystem goods and services (C5 Box 5.2). Because many ecosystem services are not traded in markets, markets fail to provide appropriate signals that might otherwise contribute to the efficient allocation and sustainable use of the services. Even if people are aware of the services provided by an ecosystem, they are neither compensated for providing these services nor penalized for reducing them. In addition, the people harmed by the degradation of ecosystem services are often not the ones who benefit from the actions leading to their degradation, and so those costs are not factored into management decisions. A wide range of opportunities exists to influence human behavior to address this challenge in the form of economic and financial instruments. Some of them establish markets; others work through the monetary and financial interests of the targeted social actors; still others affect relative prices.

Market mechanisms can only work if supporting institutions are in place, and thus there is a need to build institutional capacity to enable more widespread use of these mechanisms (R17). The adoption of economic instruments usually requires a legal framework, and in many cases the choice of a viable and effective economic intervention mechanism is determined by the socioeconomic context. For example, resource taxes can be a powerful instrument to guard against the overexploitation of an ecosystem service, but an effective tax scheme requires well-established and reliable monitoring and tax collection systems. Similarly, subsidies can be effective for introducing and implementing certain technologies or management procedures, but they are inappropriate in settings that lack the transparency and accountability needed to prevent corruption.
The establishment of market mechanisms also often involves explicit decisions about wealth distribution and resource allocation, when, for example, decisions are made to establish private property rights for resources that were formerly considered common pool resources. For that reason, the inappropriate use of market mechanisms can further exacerbate problems of poverty.

Promising interventions include:

- Elimination of subsidies that promote excessive use of ecosystem services (and, where possible, transfer of these subsidies to payments for non-marketed ecosystem services) (G) (S7.ES). Subsidies paid to the agricultural sectors of OECD countries between 2001 and 2003 averaged over $324 billion annually, or one third the global value of agricultural products in 2000. Many countries outside the OECD also have inappropriate subsidies. A significant proportion of this total involves production subsidies that lead to greater food production in countries with subsidies than the global market conditions warrant, that promote the overuse of water, fertilizers, and pesticides, and that reduce the profitability of agriculture in developing countries. They also increase land values, adding to landowners’ resistance to subsidy reductions. On the social side, agricultural subsidies make farmers overly dependent on taxpayers for their livelihood, change wealth distribution and social composition by benefiting large corporate farms to the detriment of smaller family farms, and contribute to the dependence of large segments of the developing world on aid. Finally, it is not clear that these policies achieve one of their primary targets—supporting farmers’ income. Only about a quarter of total spending on price supports translates into additional income for farm households. Similar problems are created by fishery subsidies, which for the OECD countries were estimated at $6.2 billion in 2002, or about 20% of the gross value of production that year (C8.4.1). Subsidies on fisheries, apart from their distributional impacts, affect the management of resources and their sustainable use by encouraging overexploitation of the resource, thereby worsening the common property problem present in fisheries. Although some indirect subsidies, such as payments for the withdrawal of individual transferable harvest quotas, could have a positive impact on fisheries management, the majority of subsidies have a negative effect. Inappropriate subsidies are also common in sectors such as water and forestry. Although removal of production subsidies would produce net benefits, it would not occur without costs. The farmers and fishers benefiting directly from the subsidies would suffer the most immediate losses, but there would also be indirect effects on ecosystems both locally and globally. In some cases it may be possible to transfer production subsidies to other activities that promote ecosystem stewardship, such as payment for the provision or enhancement of regulatory or supporting services. Compensatory mechanisms may be needed for the poor who are adversely affected by the immediate removal of subsidies (R17.5). Reduced subsidies within the OECD may lessen pressures on some ecosystems in those countries, but they could lead to more rapid conversion and intensification of land for agriculture in developing countries and would thus need to be accompanied by policies to minimize the adverse impacts on ecosystems there.
- Greater use of economic instruments and market-based approaches in the management of ecosystem services (G, B, N) (RWG). Economic instruments and market mechanisms with the potential to enhance the management of ecosystem services include:

  - Taxes or user fees for activities with “external” costs (trade-offs not accounted for in the market). These instruments create an incentive that lessens the external costs and provides revenues that can help protect the damaged ecosystem services. Examples include taxes on excessive application of nutrients or ecotourism user fees.

  - Creation of markets, including through cap-and-trade systems. Ecosystem services that have been treated as “free” resources, as is often the case for water, tend to be used wastefully. The establishment of markets for the services can both increase the incentives for their conservation and increase the economic efficiency of their allocation if supporting legal and economic institutions are in place. However, as noted earlier, while markets will increase the efficiency of the use of the resource, they can have harmful effects on particular groups of users who may be inequitably affected by the change (R17). The combination of regulated emission caps and market mechanisms for trading pollution rights often provides an efficient means of reducing emissions harmful to ecosystems. For example, nutrient trading systems may be a low-cost way to reduce water pollution in the United States (R7 Box 7.3). One of the most rapidly growing markets related to ecosystem services is the carbon market. (See Figure 8.1.) Approximately 64 million tons of carbon dioxide equivalent were exchanged through projects from January to May 2004, nearly as much as during all of 2003 (78 million tons) (C5 Box 5.2). The value of carbon dioxide trades in 2003 was approximately $300 million. About one quarter of the trades (by volume of CO2 equivalents) involve investment in ecosystem services (hydropower or biomass). The World Bank has established a fund with a capital of $33.3 million (as of January 2005) to invest in afforestation and reforestation projects that sequester or conserve carbon in forest and agroecosystems while promoting biodiversity conservation and poverty alleviation. It is speculated that the value of the global carbon emissions trading markets may reach $10 billion to $44 billion in 2010 (and involve trades totaling 4.5 billion tons of carbon dioxide or equivalent).

  - Payment for ecosystem services. Mechanisms can be established to enable individuals, firms, or the public sector to pay resource owners to provide particular services. For example, in New South Wales, Australia, associations of farmers purchase salinity credits from the State Forests Agency, which in turn contracts with upstream landholders to plant trees, which lower water tables and store carbon. Similarly, in 1996 Costa Rica established a nationwide system of conservation payments to induce landowners to provide ecosystem services. Under this program, the government brokers contracts between international and domestic “buyers” and local “sellers” of sequestered carbon, biodiversity, watershed services, and scenic beauty. By 2001, more than 280,000 hectares of forests had been incorporated into the program at a cost of about $30 million, with pending applications covering an additional 800,000 hectares (C5 Box 5.2).
Other innovative conservation financing mechanisms include “biodiversity offsets” (whereby developers pay for conservation activities as compensation for unavoidable harm that a project causes to biodiversity). An online news site, the Ecosystem Marketplace, has now been established by a consortium of institutions to provide information on the development of markets for ecosystem services and the payments for them.

  - Mechanisms to enable consumer preferences to be expressed through markets. Consumer pressure may provide an alternative way to influence producers to adopt more sustainable production practices in the absence of effective government regulation. For example, certification schemes that exist for sustainable fisheries and forest practices provide people with the opportunity to promote sustainability through their consumer choices. Within the forest sector, forest certification has become widespread across many countries and forest conditions; thus far, however, most certified forests are in temperate regions, managed by large companies that export to northern retailers (R8).

Social and Behavioral Responses

Social and behavioral responses—including population policy; public education; empowerment of communities, women, and youth; and civil society actions—can be instrumental in responding to ecosystem degradation. These are generally interventions that stakeholders initiate and execute through exercising their procedural or democratic rights in efforts to improve ecosystems and human well-being.

Promising interventions include:

- Measures to reduce aggregate consumption of unsustainably managed ecosystem services (G, B, N) (RWG). The choices about what individuals consume and how much they consume are influenced not just by considerations of price but also by behavioral factors related to culture, ethics, and values. Behavioral changes that could reduce demand for degraded ecosystem services can be encouraged through actions by governments (such as education and public awareness programs or the promotion of demand-side management), industry (such as improved product labeling or commitments to use raw materials from sources certified as sustainable), and civil society (such as public awareness campaigns). Efforts to reduce aggregate consumption, however, must sometimes incorporate measures to increase the access to and consumption of those same ecosystem services by specific groups such as poor people.

- Communication and education (G, B, N) (RWG, R5). Improved communication and education are essential to achieve the objectives of the environmental conventions, the Johannesburg Plan of Implementation, and the sustainable management of natural resources more generally. Both the public and decision-makers can benefit from education concerning ecosystems and human well-being, but education more generally provides tremendous social benefits that can help address many drivers of ecosystem degradation. Barriers to the effective use of communication and education include a failure to use research and apply modern theories of learning and change.
While the importance of communication and education is well recognized, providing the human and financial resources to undertake effective work is a continuing barrier.

- Empowerment of groups particularly dependent on ecosystem services or affected by their degradation, including women, indigenous people, and young people (G, B, N) (RWG). Despite women’s knowledge about the environment and the potential they possess, their participation in decision-making has often been restricted by social and cultural structures. Young people are key stakeholders in that they will experience the longer-term consequences of decisions made today concerning ecosystem services. Indigenous control of traditional homelands can sometimes have environmental benefits, although the primary justification continues to be based on human and cultural rights.

Technological Responses

Given the growing demands for ecosystem services and other increased pressures on ecosystems, the development and diffusion of technologies designed to increase the efficiency of resource use or reduce the impacts of drivers such as climate change and nutrient loading are essential. Technological change has been essential for meeting growing demands for some ecosystem services, and technology holds considerable promise to help meet future growth in demand. Technologies already exist for reducing nutrient pollution at reasonable costs—including technologies to reduce point source emissions, changes in crop management practices, and precision farming techniques to help control the application of fertilizers to a field—but new policies are needed for these tools to be applied on a sufficient scale to slow and ultimately reverse the increase in nutrient loading (recognizing that this global goal must be achieved even while increasing nutrient applications in some regions such as sub-Saharan Africa). Many negative impacts on ecosystems and human well-being have resulted from these technological changes, however (R17.ES). The cost of “retrofitting” technologies once their negative consequences become apparent can be extremely high, so careful assessment is needed prior to the introduction of new technologies.

Promising interventions include:

- Promotion of technologies that increase crop yields without any harmful impacts related to water, nutrient, and pesticide use (G, B, N) (R6). Agricultural expansion will continue to be one of the major drivers of biodiversity loss well into the twenty-first century. Development, assessment, and diffusion of technologies that could increase the production of food per unit area sustainably without harmful trade-offs related to excessive use of water, nutrients, or pesticides would significantly lessen pressure on other ecosystem services. Without the intensification that has taken place since 1950, a further 20 million square kilometers of land would have had to be brought into production to achieve today’s crop production (C.SDM). The challenge for the future is to similarly reduce the pressure for expansion of agriculture without simultaneously increasing pressures on ecosystem services due to water use, excessive nutrient loading, and pesticide use.

- Restoration of ecosystem services (G, B, N) (RWG, R7.4). Ecosystem restoration activities are now common in many countries and include actions to restore almost all types of ecosystems, including wetlands, forests, grasslands, estuaries, coral reefs, and mangroves.
Ecosystems with some features of the ones that were present before conversion can often be established and can provide some of the original ecosystem services (such as pollution filtration in wetlands or timber production from forests). The restored systems seldom fully replace the original systems, but they still help meet needs for particular services. Yet the cost of restoration is generally extremely high in relation to the cost of preventing the degradation of the ecosystem. Not all services can be restored, and those that are heavily degraded may require considerable time for restoration.

- Promotion of technologies to increase energy efficiency and reduce greenhouse gas emissions (G, B) (R13). Significant reductions in net greenhouse gas emissions are technically feasible due to an extensive array of technologies in the energy supply, energy demand, and waste management sectors. Reducing projected emissions will require a portfolio of energy production technologies ranging from fuel switching (coal/oil to gas) and increased power plant efficiency to increased use of renewable energy technologies, complemented by more efficient use of energy in the transportation, buildings, and industry sectors. It will also involve the development and implementation of supporting institutions and policies to overcome barriers to the diffusion of these technologies into the marketplace, increased public and private-sector funding for research and development, and effective technology transfer.

Knowledge and Cognitive Responses

Effective management of ecosystems is constrained both by a lack of knowledge and information concerning different aspects of ecosystems and by the failure to use adequately the information that does exist in support of management decisions. Although sufficient information exists to take many actions that could help conserve ecosystems and enhance human well-being, major information gaps exist. In most regions, for example, relatively little is known about the status and economic value of most ecosystem services, and their depletion is rarely tracked in national economic accounts. Limited information exists about the likelihood of nonlinear changes in ecosystems or the location of thresholds where such changes may be encountered. Basic global data on the extent of and trends in different types of ecosystems and land use are surprisingly scarce. Models used to project future environmental and economic conditions have limited capability of incorporating ecological “feedbacks,” including nonlinear changes in ecosystems.

At the same time, decision-makers do not use all of the relevant information that is available. This is due in part to institutional failures that prevent existing policy-relevant scientific information from being made available to decision-makers. But it is also due to the failure to incorporate other forms of knowledge and information, such as traditional knowledge and practitioners’ knowledge, that are often of considerable value for ecosystem management.

Promising interventions include:

- Incorporate both the market and non-market values of ecosystems in resource management and investment decisions (G, B) (RWG). Most resource management and investment decisions are strongly influenced by considerations of the monetary costs and benefits of alternative policy choices. In the case of ecosystem management, however, this often leads to outcomes that are not in the interest of society, since the non-marketed values of ecosystems may exceed the marketed values.
As a result, many existing resource management policies favor sectors such as agriculture, forestry, and fisheries at the expense of the use of these same ecosystems for water supply, recreation, and cultural services that may be of greater economic value. Decisions can be improved if they include the total economic value of alternative management options and involve deliberative mechanisms that bring to bear noneconomic considerations as well.

- Use of all relevant forms of knowledge and information in assessments and decision-making, including traditional and practitioners’ knowledge (G, B, N) (RWG, C17.ES). Effective management of ecosystems typically requires “place-based” knowledge—information about the specific characteristics and history of an ecosystem. Formal scientific information is often one source of such information, but traditional knowledge or practitioners’ knowledge held by local resource managers can be of equal or greater value. While that knowledge is used in the decisions taken by those who have it, it is too rarely incorporated into other decision-making processes and is often inappropriately dismissed.

- Enhance and sustain human and institutional capacity for assessing the consequences of ecosystem change for human well-being and acting on such assessments (G, B, N) (RWG). Greater technical capacity is needed for agriculture, forest, and fisheries management. But the capacity that exists for these sectors, as limited as it is in many countries, is still vastly greater than the capacity for effective management of other ecosystem services. Because awareness of the importance of these other services has only recently grown, there is limited experience with assessing ecosystem services fully. Serious limits exist in all countries, but especially in developing countries, in terms of the expertise needed in such areas as monitoring changes in ecosystem services, economic valuation or health assessment of ecosystem changes, and policy analysis related to ecosystem services. Even when such assessment information is available, however, the traditional highly sectoral nature of decision-making and resource management makes the implementation of recommendations difficult. This constraint can also be overcome through increased training of individuals in existing institutions and through institutional reforms to build capacity for more integrated responses.

Design of Effective Decision-making Processes

Decisions affecting ecosystems and their services can be improved by changing the processes used to reach those decisions. The context of decision-making about ecosystems is changing rapidly. The new challenge to decision-making is to make effective use of information and tools in this changing context in order to improve the decisions. At the same time, some old challenges must still be addressed. The decision-making process and the actors involved influence the intervention chosen. Decision-making processes vary across jurisdictions, institutions, and cultures. Yet the MA has identified the following elements of decision-making processes related to ecosystems and their services that tend to improve the decisions reached and their outcomes for ecosystems and human well-being (R18.ES):

- Use the best available information, including considerations of the value of both marketed and non-marketed ecosystem services.
- Ensure transparency and the effective and informed participation of important stakeholders.
- Recognize that not all values at stake can be quantified, and thus quantification can provide a false objectivity in decision processes that have significant subjective elements.
- Strive for efficiency, but not at the expense of effectiveness.
- Consider equity and vulnerability in terms of the distribution of costs and benefits.
- Ensure accountability and provide for regular monitoring and evaluation.
- Consider cumulative and cross-scale effects and, in particular, assess trade-offs across different ecosystem services.

A wide range of deliberative tools (which facilitate transparency and stakeholder participation), information-gathering tools (which are primarily focused on collecting data and opinions), and planning tools (which are typically used to evaluate potential policy options) can assist decision-making concerning ecosystems and their services (R3 Tables 3.6 to 3.8). Deliberative tools include neighborhood forums, citizens’ juries, community issues groups, consensus conferences, electronic democracy, focus groups, issue forums, and ecosystem service user forums. Examples of information-gathering tools include citizens’ research panels, deliberative opinion polls, environmental impact assessments, participatory rural appraisal, and rapid rural appraisal. Some common planning tools are consensus participation, cost-benefit analysis, multi-criteria analysis, participatory learning and action, stakeholder decision analysis, trade-off analysis, and visioning exercises. The use of decision-making methods that adopt a pluralistic perspective is particularly pertinent, since these techniques do not give undue weight to any particular viewpoint. These tools can be used at a variety of scales, including global, sub-global, and local.

A variety of frameworks and methods can be used to make better decisions in the face of uncertainties in data, prediction, context, and scale (R4.5). Commonly used methods include cost-benefit or multi-criteria analyses, risk assessment, the precautionary principle, and vulnerability analysis. (See Table 8.1.) All these methods have been able to support optimization exercises, but few of them have much to say about equity. Cost-benefit analysis can, for example, be modified to weight the interests of some people more than others. The discount rate can be viewed, in long-term analyses, as a means of weighing the welfare of future generations (a brief numerical illustration of discounting is given below); and the precautionary principle can be expressed in terms of reducing the exposure of certain populations or systems whose preferential status may be the result of equity considerations. Only multi-criteria analysis was designed primarily to accommodate optimization across multiple objectives with complex interactions, but this can also be adapted to consider equity and threshold issues at national and sub-national scales. Finally, the existence and significance of various thresholds for change can be explored by several tools, but only the precautionary principle was designed explicitly to address such issues.

Scenarios provide one way to cope with many aspects of uncertainty, but our limited understanding of ecological systems and human responses shrouds any individual scenario in its own characteristic uncertainty (R4.ES). Scenarios can be used to highlight the implications of alternative assumptions about critical uncertainties related to the behavior of human and ecological systems. In this way, they provide one means to cope with many aspects of uncertainty in assessing responses.
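Before considering scenarios further, the discounting point above can be made concrete with a short calculation. The sketch below is purely illustrative; the discount rates and the 100-unit benefit are arbitrary values chosen for exposition, not figures from the MA.

```python
# Present value of a fixed future benefit under different discount rates.
# Purely illustrative numbers; the point is how quickly distant benefits shrink.
def present_value(benefit, rate, years):
    return benefit / (1 + rate) ** years

if __name__ == "__main__":
    benefit = 100.0  # a benefit delivered to a future generation
    for rate in (0.01, 0.03, 0.07):
        row = ", ".join(f"{years:>3} yr: {present_value(benefit, rate, years):6.2f}"
                        for years in (10, 50, 100))
        print(f"discount rate {rate:.0%} -> {row}")
```

At higher discount rates, benefits arriving a century from now are worth almost nothing in today's terms, which is one way of seeing how the choice of rate embeds a judgment about intergenerational equity.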
The relevance, significance, and influence of scenarios ultimately depend on who is involved in their development (SG9.ES). At the same time, though, there are a number of reasons to be cautious in the use of scenarios. First, individual scenarios represent conditional projections based on specific assumptions. Thus, to the extent that our understanding and representation of the ecological and human systems represented in the scenarios is limited, specific scenarios are characterized by their own uncertainty. Second, there is uncertainty in translating the lessons derived from scenarios developed at one scale—say, global—to the assessment of responses at other scales—say, sub-national. Third, scenarios often have hidden and hard-to-articulate assumptions. Fourth, environmental scenarios have tended to more effectively incorporate state-of-the-art natural science modeling than social science modeling. Historically, most responses addressing ecosystem services have concentrated on the short-term benefits from increasing the productivity of provisioning services (RWG). Far less emphasis has been placed on managing regulating, cultural, and supporting ecosystem services; on management goals related to poverty alleviation and equitable distribution of benefits from ecosystem services; and on the long-term consequences of ecosystem change on the provision of services. As a result, the current management regime falls far short of the potential for meeting human needs and conserving ecosystems. Effective management of ecosystems requires coordinated responses at multiple scales (SG9, R17.ES). Responses that are successful at a small scale are often less successful at higher levels due to constraints in legal frameworks and government institutions that prevent their success. In addition, there appear to be limits to scaling up, not only because of these higher-level constraints, but also because interventions at a local level often address only direct drivers of change rather than indirect or underlying ones. For example, a local project to improve livelihoods of communities surrounding a protected area in order to reduce pressure on it, if successful, may increase migration into buffer zones, thereby adding to pressures. Cross-scale responses may be more effective at addressing the higher-level constraints and leakage problems and simultaneously tackling regional and national as well as local-level drivers of change. Examples of successful cross-scale responses include some co-management approaches to natural resource management in fisheries and forestry and multi-stakeholder policy processes (R15.ES). Active adaptive management can be a particularly valuable tool for reducing uncertainty about ecosystem management decisions (R17.4.5). The term “active” adaptive management is used here to emphasize the key characteristic of the original concept (which is frequently and inappropriately used to mean “learning by doing”): the design of management programs to test hypotheses about how components of an ecosystem function and interact and to thereby reduce uncertainty about the system more rapidly than would otherwise occur. Under an adaptive management approach, for example, a fisheries manager might intentionally set harvest levels either lower or higher than the “best estimate” in order to gain information more rapidly about the shape of the yield curve for the fishery.
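The value of such deliberate probing can be illustrated with a toy sketch in Python with NumPy. The yield curve, harvest rates, and noise level below are all invented; the point is only that sampling a single "best estimate" harvest rate, however many times, can never reveal the curvature that active probing exposes.

import numpy as np

def true_yield(h):
    # Hypothetical, unknown-to-the-manager yield curve, peaking at a harvest rate of 0.3.
    return 100 * h * (0.6 - h)

rng = np.random.default_rng(0)

def observed(h_values, noise_sd=1.0):
    h = np.asarray(h_values, dtype=float)
    return h, true_yield(h) + rng.normal(0.0, noise_sd, h.size)

# Passive management: harvest at the single "best estimate" every year.
h_pass, y_pass = observed([0.2] * 6)
# Active adaptive management: deliberately probe above and below the best estimate.
h_act, y_act = observed([0.1, 0.2, 0.3, 0.4, 0.2, 0.35])

for label, h, y in [("passive", h_pass, y_pass), ("active", h_act, y_act)]:
    design = np.vander(h, 3)                    # columns: h**2, h, 1
    rank = np.linalg.matrix_rank(design)
    if rank < 3:
        print(f"{label}: design rank {rank} < 3, so the yield curve is not identifiable")
    else:
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        print(f"{label}: estimated curve coefficients {np.round(coef, 1)} (true: [-100, 60, 0])")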
Given the high levels of uncertainty surrounding coupled socio-ecological systems, the use of active adaptive management is often warranted. What are the most important uncertainties hindering decision-making concerning ecosystems? The MA was unable to provide adequate scientific information to answer a number of important policy questions related to ecosystem services and human well-being. In some cases, the scientific information may well exist already but the process used and time frame available prevented either access to the needed information or its assessment. But in many cases either the data needed to answer the questions were unavailable or the knowledge of the ecological or social system was inadequate. We identify the following information gaps that, if addressed, could significantly enhance the ability of a process like the MA to answer policy-relevant questions posed by decision-makers (CWG, SWG, RWG, SGWG). Condition and Trends - There are major gaps in global and national monitoring systems that result in the absence of well-documented, comparable, time-series information for many ecosystem features and that pose significant barriers in assessing condition and trends in ecosystem services. Moreover, in a number of cases, including hydrological systems, the condition of the monitoring systems that do exist is declining. - Although for 30 years remote sensing capacity has been available that could enable rigorous global monitoring of land cover change, financial resources have not been available to process this information, and thus accurate measurements of land cover change are only available on a case study basis. - Information on land degradation in drylands is extremely poor. Major shortcomings in the currently available assessments point to the need for a systematic global monitoring program, leading to the development of a scientifically credible, consistent baseline of the state of land degradation and desertification. - There is little replicable data on global forest extent that can be tracked over time. - There is no reasonably accurate global map of wetlands. - There are major gaps in information on non-marketed ecosystem services, particularly regulating, cultural, and supporting services. - There is no complete inventory of species and limited information on the actual distributions of many important plant and animal species. - More information is needed concerning: - the nature of interactions among drivers in particular regions and across scales; - the responses of ecosystems to changes in the availability of important nutrients and carbon dioxide; - nonlinear changes in ecosystems, predictability of thresholds, and structural and dynamic characteristics of systems that lead to threshold and irreversible changes; and - quantification and prediction of the relationships between biodiversity changes and changes in ecosystem services for particular places and times. - There is limited information on the economic consequences of changes in ecosystem services at any scale and, more generally, limited information on the details of linkages between human well-being and the provision of ecosystem services, except in the case of food and water. - There are relatively few models of the relationship between ecosystem services and human well-being. - There is a lack of analytical and methodological approaches to explicitly nest or link scenarios developed at different geographic scales.
This innovation would provide decision-makers with information that directly links local, national, regional, and global futures of ecosystem services in considerable detail. - There is limited modeling capability related to effects of changes in ecosystems on flows of ecosystem services and effects of changes in ecosystem services on changes in human well-being. Quantitative models linking ecosystem change to many ecosystem services are also needed. - Significant advances are needed in models that link ecological and social processes, and models do not yet exist for many cultural and supporting ecosystem services. - There is limited capability to incorporate adaptive responses and changes in human attitudes and behaviors in models and to incorporate critical feedbacks into quantitative models. As food supply changes, for example, so will patterns of land use, which will then feed back on ecosystem services, climate, and food supply. - There is a lack of theories and models that anticipate thresholds that, once passed, yield fundamental system changes or even system collapse. - There is limited capability of communicating to nonspecialists the complexity associated with holistic models and scenarios involving ecosystem services, in particular in relation to the abundance of nonlinearities, feedbacks, and time lags in most ecosystems. - There is limited information on the marginal costs and benefits of alternative policy options in terms of total economic value (including non-marketed ecosystem services). - Substantial uncertainty exists with respect to who benefits from watershed services and how changes in particular watersheds influence those services; information in both of these areas is needed in order to determine whether markets for watershed services can be a fruitful response option. - There has been little social science analysis of the effectiveness of responses on biodiversity conservation. - There is considerable uncertainty with regard to the importance people in different cultures place on cultural services, how this changes over time, and how it influences the net costs and benefits of trade-offs and decisions. Disclaimer: This chapter is taken wholly from, or contains information that was originally written for the Millennium Ecosystem Assessment as published by the World Resources Institute. The content has not been modified by the Encyclopedia of Earth. This is a chapter from Ecosystems and Human Well-being: Synthesis (full report).
http://www.eoearth.org/article/Ecosystems_and_Human_Well-being_Synthesis:_Key_Questions_in_the_Millennium_Ecosystem_Assessment?topic=50013
Earth's magnetic field Earth's magnetic field (also known as the geomagnetic field) is the magnetic field that extends from the Earth's inner core to where it meets the solar wind, a stream of energetic particles emanating from the Sun. Its magnitude at the Earth's surface ranges from 25 to 65 µT (0.25 to 0.65 G). It is approximately the field of a magnetic dipole tilted at an angle of 11 degrees with respect to the rotational axis—as if there were a bar magnet placed at that angle at the center of the Earth. However, unlike the field of a bar magnet, Earth's field changes over time because it is generated by the motion of molten iron alloys in the Earth's outer core (the geodynamo). The Magnetic North Pole wanders, but slowly enough that a simple compass remains useful for navigation. At random intervals (averaging several hundred thousand years) the Earth's field reverses (the north and south geomagnetic poles change places with each other). These reversals leave a record in rocks that allow paleomagnetists to calculate past motions of continents and ocean floors as a result of plate tectonics. The region above the ionosphere, and extending several tens of thousands of kilometers into space, is called the magnetosphere. By deflecting most of the charged particles of the solar wind, as well as many cosmic rays, this region shields the Earth from particles that would otherwise strip away the upper atmosphere, including the ozone layer that protects the surface from harmful ultraviolet radiation. Calculations of the loss of carbon dioxide from the atmosphere of Mars, resulting from scavenging of ions by the solar wind, are consistent with a near-total loss of its atmosphere since the magnetic field of Mars dissipated. The polarity of the Earth's magnetic field is recorded in igneous rocks. Reversals of the field are detectable as "stripes" centered on mid-ocean ridges where the sea floor is spreading, while the stability of the geomagnetic poles between reversals allows paleomagnetists to track the past motion of continents (the study of the Earth's past magnetic field is known as paleomagnetism). Reversals also provide the basis for magnetostratigraphy, a way of dating rocks and sediments. The field also magnetizes the crust; magnetic anomalies can be used to search for ores. Main characteristics At any location, the Earth's magnetic field can be represented by a three-dimensional vector. A typical procedure for measuring its direction is to use a compass to determine the direction of magnetic North. Its angle relative to true North is the declination (D) or variation. Facing magnetic North, the angle the field makes with the horizontal is the inclination (I) or dip. The intensity (F) of the field is proportional to the force it exerts on a magnet. Another common representation is in X (North), Y (East) and Z (Down) coordinates. The intensity of the field is greatest near the poles and weaker near the Equator. It is often measured in gauss (G) but is generally reported in nanotesla (nT), with 1 G = 100,000 nT. A nanotesla is also referred to as a gamma (γ). The field ranges between approximately 25,000 and 65,000 nT (0.25–0.65 G). By comparison, a strong refrigerator magnet has a field of about 100 G. A map of intensity contours is called an isodynamic chart.
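As a minimal sketch of the relations among the quantities just defined, the Python snippet below converts between the declination, inclination, and intensity description and the X, Y, Z component description, and between gauss and nanotesla. The conversion formulas follow the standard geomagnetic conventions stated above; the example field values are illustrative, not an actual observatory record.

import math

NT_PER_GAUSS = 100_000  # 1 G = 100,000 nT, as noted above

def xyz_from_dif(declination_deg, inclination_deg, intensity_nt):
    """North (X), East (Y), Down (Z) components from declination, inclination, intensity."""
    d = math.radians(declination_deg)
    i = math.radians(inclination_deg)
    horizontal = intensity_nt * math.cos(i)
    return (horizontal * math.cos(d),    # X, toward geographic north
            horizontal * math.sin(d),    # Y, toward east
            intensity_nt * math.sin(i))  # Z, downward

def dif_from_xyz(x, y, z):
    horizontal = math.hypot(x, y)
    return (math.degrees(math.atan2(y, x)),           # declination
            math.degrees(math.atan2(z, horizontal)),  # inclination
            math.sqrt(x * x + y * y + z * z))         # total intensity

# Illustrative mid-latitude values:
x, y, z = xyz_from_dif(declination_deg=-5.0, inclination_deg=65.0, intensity_nt=50_000)
print([round(v) for v in (x, y, z)])   # roughly [21050, -1842, 45315] nT
print(dif_from_xyz(x, y, z))           # recovers (-5.0, 65.0, 50000.0)
print(50_000 / NT_PER_GAUSS, "G")      # 0.5 G, within the 0.25-0.65 G range quoted above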
An isodynamic chart for the Earth's magnetic field is shown to the left. A minimum intensity occurs over South America while there are maxima over northern Canada, Siberia, and the coast of Antarctica south of Australia. The inclination is given by an angle that can assume values between -90° (up) to 90° (down). In the northern hemisphere, the field points downwards. It is straight down at the North Magnetic Pole and rotates upwards as the latitude decreases until it is horizontal (0°) at the magnetic equator. It continues to rotate upwards until it is straight up at the South Magnetic Pole. Inclination can be measured with a dip circle. An isoclinic chart (map of inclination contours) for the Earth's magnetic field is shown on the right. Declination is positive for an eastward deviation of the field relative to true north. It can be estimated by comparing the magnetic north/south heading on a compass with the direction of a celestial pole. Maps typically include information on the declination as an angle or a small diagram showing the relationship between magnetic north and true north. Information on declination for a region can be represented by a chart with isogonic lines (contour lines with each line representing a fixed declination). Geographical variation Dipolar approximation Near the surface of the Earth, its magnetic field can be closely approximated by the field of a magnetic dipole positioned at the center of the Earth and tilted at an angle of about 10° with respect to the rotational axis of the Earth. The dipole is roughly equivalent to a powerful bar magnet, with its south pole pointing towards the geomagnetic North Pole. This may seem surprising, but the north pole of a magnet is so defined because it is attracted towards the Earth's north pole. Since the north pole of a magnet attracts the south poles of other magnets and repels the north poles, it must be attracted to the south pole of Earth's magnet. The dipolar field accounts for 80–90% of the field in most locations. Magnetic poles The positions of the magnetic poles can be defined in at least two ways. The inclination of the Earth's field is 90° at the North Magnetic Pole and -90° at the South Magnetic Pole. The two poles wander independently of each other and are not directly opposite each other on the globe. They can migrate rapidly: movements of up to 40 km per year have been observed for the North Magnetic Pole. Over the last 180 years, the North Magnetic Pole has been migrating northwestward, from Cape Adelaide in the Boothia peninsula in 1831 to 600 km from Resolute Bay in 2001. The magnetic equator is the line where the inclination is zero (the magnetic field is horizontal). If a line is drawn parallel to the moment of the best-fitting magnetic dipole, the two positions where it intersects the Earth's surface are called the North and South geomagnetic poles. If the Earth's magnetic field were perfectly dipolar, the geomagnetic poles and magnetic dip poles would coincide and compasses would point towards them. However, the Earth's field has a significant contribution from non-dipolar terms, so the poles do not coincide and compasses do not generally point at either. Some of the charged particles from the solar wind are trapped in the Van Allen radiation belt. A smaller number of particles from the solar wind manage to travel, as though on an electromagnetic energy transmission line, to the Earth's upper atmosphere and ionosphere in the auroral zones. 
The only time the solar wind is observable on the Earth is when it is strong enough to produce phenomena such as the aurora and geomagnetic storms. Bright auroras strongly heat the ionosphere, causing its plasma to expand into the magnetosphere, increasing the size of the plasma geosphere, and causing escape of atmospheric matter into the solar wind. Geomagnetic storms result when the pressure of plasmas contained inside the magnetosphere is sufficiently large to inflate and thereby distort the geomagnetic field. The solar wind is responsible for the overall shape of Earth's magnetosphere, and fluctuations in its speed, density, direction, and entrained magnetic field strongly affect Earth's local space environment. For example, the levels of ionizing radiation and radio interference can vary by factors of hundreds to thousands; and the shape and location of the magnetopause and bow shock wave upstream of it can change by several Earth radii, exposing geosynchronous satellites to the direct solar wind. These phenomena are collectively called space weather. The mechanism of atmospheric stripping is caused by gas being caught in bubbles of magnetic field, which are ripped off by solar winds. Variations in the magnetic field strength have been correlated to rainfall variation within the tropics. Time dependence Short-term variations The geomagnetic field changes on time scales from milliseconds to millions of years. Shorter time scales mostly arise from currents in the ionosphere (ionospheric dynamo region) and magnetosphere, and some changes can be traced to geomagnetic storms or daily variations in currents. Changes over time scales of a year or more mostly reflect changes in the Earth's interior, particularly the iron-rich core. Data from THEMIS show that the magnetic field, which interacts with the solar wind, is reduced when the magnetic orientation is aligned between Sun and Earth - opposite to the previous hypothesis. During forthcoming solar storms, this could result in blackouts and disruptions in artificial satellites. Secular variation Changes in Earth's magnetic field on a time scale of a year or more are referred to as secular variation. Over hundreds of years, magnetic declination is observed to vary over tens of degrees. A movie on the right shows how global declinations have changed over the last few centuries. The direction and intensity of the dipole change over time. Over the last two centuries the dipole strength has been decreasing at a rate of about 6.3% per century. At this rate of decrease, the field would reach zero in about 1600 years. However, this strength is about average for the last 7 thousand years, and the current rate of change is not unusual. A prominent feature in the non-dipolar part of the secular variation is a westward drift at a rate of about 0.2 degrees per year. This drift is not the same everywhere and has varied over time. The globally averaged drift has been westward since about 1400 AD but eastward between about 1000 AD and 1400 AD. Changes that predate magnetic observatories are recorded in archaeological and geological materials. Such changes are referred to as paleomagnetic secular variation or paleosecular variation (PSV). The records typically include long periods of small change with occasional large changes reflecting geomagnetic excursions and geomagnetic reversals. 
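As a quick check on two of the figures quoted above for secular variation, the snippet below treats the roughly 6.3% per century decline in dipole strength and the roughly 0.2 degree per year westward drift as simple linear extrapolations, which, as the text itself cautions, are not predictions.

decay_per_century = 0.063                 # dipole strength loss of about 6.3% per century
centuries_to_zero = 1.0 / decay_per_century
print(round(centuries_to_zero * 100), "years")   # ~1587, i.e. "about 1600 years"

westward_drift_deg_per_year = 0.2
print(360 / westward_drift_deg_per_year, "years for one full westward circuit")  # 1800 years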
Magnetic field reversals Although the Earth's field is generally well approximated by a magnetic dipole with its axis near the rotational axis, there are occasional dramatic events where the North and South geomagnetic poles trade places. These events are called geomagnetic reversals. Evidence for these events can be found worldwide in basalts, sediment cores taken from the ocean floors, and seafloor magnetic anomalies. Reversals occur at apparently random intervals ranging from less than 0.1 million years to as much as 50 million years. The most recent such event, called the Brunhes–Matuyama reversal, occurred about 780,000 years ago. However, a study published in 2012 by a group from the German Research Center for Geosciences suggests that a brief complete reversal occurred only 41,000 years ago during the last ice age. The past magnetic field is recorded mostly by iron oxides, such as magnetite, that have some form of ferrimagnetism or other magnetic ordering that allows the Earth's field to magnetize them. This remanent magnetization, or remanence, can be acquired in more than one way. In lava flows, the direction of the field is "frozen" in small magnetic particles as they cool, giving rise to a thermoremanent magnetization. In sediments, the orientation of magnetic particles acquires a slight bias towards the magnetic field as they are deposited on an ocean floor or lake bottom. This is called detrital remanent magnetization. Thermoremanent magnetization is the form of remanence that gives rise to the magnetic anomalies around ocean ridges. As the seafloor spreads, magma wells up from the mantle and cools to form new basaltic crust. During the cooling, the basalt records the direction of the Earth's field. This new basalt forms on both sides of the ridge and moves away from it. When the Earth's field reverses, new basalt records the reversed direction. The result is a series of stripes that are symmetric about the ridge. A ship towing a magnetometer on the surface of the ocean can detect these stripes and infer the age of the ocean floor below. This provides information on the rate at which seafloor has spread in the past. Radiometric dating of lava flows has been used to establish a geomagnetic polarity time scale, part of which is shown in the image. This forms the basis of magnetostratigraphy, a geophysical correlation technique that can be used to date both sedimentary and volcanic sequences as well as the seafloor magnetic anomalies. Studies of lava flows on Steens Mountain, Oregon, indicate that the magnetic field could have shifted at a rate of up to 6 degrees per day at some time in Earth's history, which significantly challenges the popular understanding of how the Earth's magnetic field works. Temporary dipole tilt variations that take the dipole axis across the equator and then back to the original polarity are known as excursions. Earliest appearance At present, the overall geomagnetic field is becoming weaker; the present strong deterioration corresponds to a 10–15% decline over the last 150 years and has accelerated in the past several years; geomagnetic intensity has declined almost continuously from a maximum 35% above the modern value achieved approximately 2,000 years ago. The rate of decrease and the current strength are within the normal range of variation, as shown by the record of past magnetic fields recorded in rocks (figure on right). The nature of Earth's magnetic field is one of heteroscedastic fluctuation. 
An instantaneous measurement of it, or several measurements of it across the span of decades or centuries, are not sufficient to extrapolate an overall trend in the field strength. It has gone up and down in the past for no apparent reason. Also, noting the local intensity of the dipole field (or its fluctuation) is insufficient to characterize Earth's magnetic field as a whole, as it is not strictly a dipole field. The dipole component of Earth's field can diminish even while the total magnetic field remains the same or increases. The Earth's magnetic north pole is drifting from northern Canada towards Siberia at an accelerating rate—10 km per year at the beginning of the 20th century, up to 40 km per year in 2003, and faster still since then. Physical origin Earth's core and the geodynamo The Earth's magnetic field is mostly caused by electric currents in the liquid outer core, which is composed of highly conductive molten iron. A magnetic field is generated by a feedback loop: current loops generate magnetic fields (Ampère's circuital law); a changing magnetic field generates an electric field (Faraday's law); and the electric and magnetic fields exert a force on the charges that are flowing in currents (the Lorentz force). These effects can be combined in a partial differential equation for the magnetic field called the magnetic induction equation, ∂B/∂t = η∇²B + ∇×(u × B), where u is the velocity of the fluid, B is the magnetic B-field, and η = 1/(σμ) is the magnetic diffusivity, with σ the electrical conductivity and μ the permeability. The term ∂B/∂t is the time derivative of the field; ∇² is the Laplace operator and ∇× is the curl operator. The first term on the right hand side of the induction equation is a diffusion term. In a stationary fluid, the magnetic field declines and any concentrations of field spread out. If the Earth's dynamo shut off, the dipole part would disappear in a few tens of thousands of years. In a perfect conductor (σ=∞), there would be no diffusion. By Lenz's law, any change in the magnetic field would be immediately opposed by currents, so the flux through a given volume of fluid could not change. As the fluid moved, the magnetic field would go with it. The theorem describing this effect is called the frozen-in-field theorem. Even in a fluid with a finite conductivity, new field is generated by stretching field lines as the fluid moves in ways that deform it. This process could go on generating new field indefinitely, were it not that as the magnetic field increases in strength, it resists fluid motion. The motion of the fluid is sustained by convection, motion driven by buoyancy. The temperature increases towards the center of the Earth, and the higher temperature of the fluid lower down makes it buoyant. This buoyancy is enhanced by chemical separation: As the core cools, some of the molten iron solidifies and is plated to the inner core. In the process, lighter elements are left behind in the fluid, making it lighter. This is called compositional convection. A Coriolis effect, caused by the overall planetary rotation, tends to organize the flow into rolls aligned along the north-south polar axis. The mere convective motion of an electrically conductive fluid is not enough to ensure the generation of a magnetic field. The above model assumes the motion of charges (such as electrons with respect to atomic nuclei), which is a requirement for generating a magnetic field. However, it is not clear how this motion of charges arises in the circulating fluid of the outer core.
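Before turning to the proposed mechanisms for seeding such currents, the "few tens of thousands of years" quoted above for free decay can be checked with a rough order-of-magnitude estimate. The conductivity and core radius used below are typical textbook values assumed for illustration, not figures taken from this article; the decay time is that of the slowest free-decay mode of a uniformly conducting sphere.

import math

sigma = 5e5              # assumed electrical conductivity of the outer core, S/m
mu0 = 4e-7 * math.pi     # permeability of free space
eta = 1.0 / (sigma * mu0)            # magnetic diffusivity, about 1.6 m^2/s
R = 3.48e6                           # outer-core radius, m (approximate)

tau_seconds = R**2 / (math.pi**2 * eta)   # slowest free-decay mode of a conducting sphere
print(round(eta, 2), "m^2/s")
print(round(tau_seconds / 3.156e7), "years")   # ~24,000 years, i.e. a few tens of thousands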
Possible mechanisms may include electrochemical reactions which create the equivalent of a battery generating electrical current in the fluid or, a thermoelectric effect (both mechanisms somehow discredited). More robustly, remnant magnetic fields in magnetic materials in the mantle, which are cooler than their Curie temperature, would provide seed “stator” magnetic fields that would induce the required growing currents in the convectively driven fluid behaving as a dynamo, as analyzed by Dr. Philip William Livermore. The average magnetic field in the Earth's outer core was calculated to be 25 G, 50 times stronger than the field at the surface. Numerical models The equations for the geodynamo are enormously difficult to solve, and the realism of the solutions is limited mainly by computer power. For decades, theorists were confined to creating kinematic dynamos in which the fluid motion is chosen in advance and the effect on the magnetic field calculated. Kinematic dynamo theory was mainly a matter of trying different flow geometries and seeing whether they could sustain a dynamo. The first self-consistent dynamo models, ones that determine both the fluid motions and the magnetic field, were developed by two groups in 1995, one in Japan and one in the United States. The latter received a lot of attention because it successfully reproduced some of the characteristics of the Earth's field, including geomagnetic reversals. Currents in the ionosphere and magnetosphere Electric currents induced in the ionosphere generate magnetic fields (ionospheric dynamo region). Such a field is always generated near where the atmosphere is closest to the Sun, causing daily alterations that can deflect surface magnetic fields by as much as one degree. Typical daily variations of field strength are about 25 nanoteslas (nT) (i.e. ~ 1:2,000), with variations over a few seconds of typically around 1 nT (i.e. ~ 1:50,000). Crustal magnetic anomalies Magnetometers detect minute deviations in the Earth's magnetic field caused by iron artifacts, kilns, some types of stone structures, and even ditches and middens in archaeological geophysics. Using magnetic instruments adapted from airborne magnetic anomaly detectors developed during World War II to detect submarines, the magnetic variations across the ocean floor have been mapped. Basalt — the iron-rich, volcanic rock making up the ocean floor — contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. The distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these magnetic variations have provided another means to study the deep ocean floor. When newly formed rock cools, such magnetic materials record the Earth's magnetic field. Measurement and analysis The Earth's magnetic field strength was measured by Carl Friedrich Gauss in 1835 and has been repeatedly measured since then, showing a relative decay of about 10% over the last 150 years. The Magsat satellite and later satellites have used 3-axis vector magnetometers to probe the 3-D structure of the Earth's magnetic field. The later Ørsted satellite allowed a comparison indicating a dynamic geodynamo in action that appears to be giving rise to an alternate pole under the Atlantic Ocean west of S. Africa. Governments sometimes operate units that specialize in measurement of the Earth's magnetic field. 
These are geomagnetic observatories, typically part of a national Geological Survey, for example the British Geological Survey's Eskdalemuir Observatory. Such observatories can measure and forecast magnetic conditions that sometimes affect communications, electric power, and other human activities. (See magnetic storm.) The International Real-time Magnetic Observatory Network, with over 100 interlinked geomagnetic observatories around the world, has been recording the Earth's magnetic field since 1991. The military determines local geomagnetic field characteristics, in order to detect anomalies in the natural background that might be caused by a significant metallic object such as a submerged submarine. Typically, these magnetic anomaly detectors are flown in aircraft like the UK's Nimrod or towed as an instrument or an array of instruments from surface ships. Statistical models Each measurement of the magnetic field is at a particular place and time. If an accurate estimate of the field at some other place and time is needed, the measurements must be converted to a model and the model used to make predictions. Spherical harmonics The most common way of analyzing the global variations in the Earth's magnetic field is to fit the measurements to a set of spherical harmonics. This was first done by Carl Friedrich Gauss. Spherical harmonics are functions that oscillate over the surface of a sphere. They are the product of two functions, one that depends on latitude and one on longitude. The function of longitude is zero along zero or more great circles passing through the North and South Poles; the number of such nodal lines is the absolute value of the order m. The function of latitude is zero along zero or more latitude circles; this plus the order is equal to the degree ℓ. Each harmonic is equivalent to a particular arrangement of magnetic charges at the center of the Earth. A monopole is an isolated magnetic charge, which has never been observed. A dipole is equivalent to two opposing charges brought close together and a quadrupole to two dipoles brought together. Spherical harmonics can represent any scalar field (function of position) that satisfies certain properties. A magnetic field is a vector field, but if it is expressed in Cartesian components X, Y, Z, each component is the derivative of the same scalar function called the magnetic potential. Analyses of the Earth's magnetic field use a modified version of the usual spherical harmonics that differ by a multiplicative factor. A least-squares fit to the magnetic field measurements gives the Earth's field as the sum of spherical harmonics, each multiplied by the best-fitting Gauss coefficient gℓm or hℓm. The lowest-degree Gauss coefficient, g00, gives the contribution of an isolated magnetic charge, so it is zero. The next three coefficients – g10, g11, and h11 – determine the direction and magnitude of the dipole contribution. The best-fitting dipole is tilted at an angle of about 10° with respect to the rotational axis, as described earlier. Radial dependence Spherical harmonic analysis can be used to distinguish internal from external sources if measurements are available at more than one height (for example, ground observatories and satellites). In that case, each term with coefficient gℓm or hℓm can be split into two terms: one that decreases with radius as 1/r^(ℓ+1) and one that increases with radius as r^ℓ.
The increasing terms fit the external sources (currents in the ionosphere and magnetosphere). However, averaged over a few years the external contributions average to zero. The remaining terms predict that the potential of a dipole source (ℓ=1) drops off as 1/r^2. The magnetic field, being a derivative of the potential, drops off as 1/r^3. Quadrupole terms drop off as 1/r^4, and higher order terms drop off increasingly rapidly with the radius. The radius of the outer core is about half of the radius of the Earth. If the field at the core-mantle boundary is fit to spherical harmonics, the dipole part is smaller at the surface by a factor of about 8, the quadrupole part by a factor of about 16, and so on. Thus, only the components with large wavelengths can be noticeable at the surface. From a variety of arguments, it is usually assumed that only terms up to degree 14 or less have their origin in the core. These have wavelengths of about 2000 km or more. Smaller features are attributed to crustal anomalies. Global models The International Association of Geomagnetism and Aeronomy maintains a standard global field model called the International Geomagnetic Reference Field. It is updated every 5 years. The 11th-generation model, IGRF11, was developed using data from satellites (Ørsted, CHAMP and SAC-C) and a world network of geomagnetic observatories. The spherical harmonic expansion was truncated at degree 10, with 120 coefficients, until 2000. Subsequent models are truncated at degree 13 (195 coefficients). Another global field model is produced jointly by the National Geophysical Data Center and the British Geological Survey. This model truncates at degree 12 (168 coefficients). It is the model used by the United States Department of Defense, the Ministry of Defence (United Kingdom), the North Atlantic Treaty Organization, and the International Hydrographic Office as well as in many civilian navigation systems. A third model, produced by the Goddard Space Flight Center (NASA and GSFC) and the Danish Space Research Institute, uses a "comprehensive modeling" approach that attempts to reconcile data with greatly varying temporal and spatial resolution from ground and satellite sources. Animals including birds and turtles can detect the Earth's magnetic field, and use the field to navigate during migration. Cows and wild deer tend to align their bodies north-south while relaxing, but not when the animals are under high voltage power lines, leading researchers to believe magnetism is responsible.
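As a closing numerical sketch tying together two points from the spherical-harmonic discussion above: how the three degree-1 Gauss coefficients fix the tilt and strength of the best-fitting dipole, and how steeply higher-degree internal terms are attenuated between the core-mantle boundary and the surface. The coefficient values below are approximate, of the order found in recent reference-field models, and the radius ratio of one half is the same rough figure used in the text; both are for illustration only.

import math

g10, g11, h11 = -29500.0, -1500.0, 4800.0   # nT, approximate degree-1 Gauss coefficients

dipole_strength = math.sqrt(g10**2 + g11**2 + h11**2)
tilt_deg = math.degrees(math.acos(abs(g10) / dipole_strength))
print(f"dipole term: {dipole_strength:.0f} nT, tilted {tilt_deg:.1f} degrees from the rotation axis")
# roughly 29,926 nT and 9.7 degrees - the "about 10 degrees" quoted earlier

r_ratio = 0.5                               # core radius / Earth radius (actual ratio ~0.55)
for degree in (1, 2, 3, 8, 14):
    attenuation = r_ratio ** (degree + 2)   # internal field scales as 1/r**(degree + 2)
    print(f"degree {degree:2d}: surface field is {attenuation:.6f} of its core-mantle value")
# degree 1 -> 1/8, degree 2 -> 1/16, degree 14 -> about 1/65,000: only the long-wavelength
# core terms remain noticeable at the surface.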
http://en.wikipedia.org/wiki/Earth's_magnetic_field
Induction is a specific form of reasoning in which the premises of an argument support a conclusion, but do not ensure it. The topic of induction is important in analytic philosophy for several reasons and is discussed in several philosophical sub-fields, including logic, epistemology, and philosophy of science. However, the most important philosophical interest in induction lies in the problem of whether induction can be "justified." This problem is often called "the problem of induction" and was discovered by the Scottish philosopher David Hume (1711-1776). Therefore, it would be worthwhile to define what philosophers mean by "induction" and to distinguish it from other forms of reasoning. It would also be helpful to present Hume’s problem of induction, Nelson Goodman’s (1906-1998) new riddle of induction, and statistical as well as probabilistic inference as potential solutions to these problems. The sort of induction that philosophers are interested in is known as enumerative induction. Enumerative induction (or simply induction) comes in two types, "strong" induction and "weak" induction. Strong induction has the following form: A1 is a B. A2 is a B. ... An is a B. Therefore, all As are Bs. An example of strong induction is that all ravens are black because each raven that has ever been observed has been black. But notice that one need not make such a strong inference with induction because there are two types, the other being weak induction. Weak induction has the following form: A1 is a B. A2 is a B. ... An is a B. Therefore, the next A will be a B. An example of weak induction is that because every raven that has ever been observed has been black, the next observed raven will be black. Enumerative induction should not be confused with mathematical induction. While enumerative induction concerns matters of empirical fact, mathematical induction concerns matters of mathematical fact. Specifically, mathematical induction is what mathematicians use to make claims about an infinite set of mathematical objects. Mathematical induction is different from enumerative induction because mathematical induction guarantees the truth of its conclusions since it rests on what is called an “inductive definition” (sometimes called a “recursive definition”). Inductive definitions define sets (usually infinite sets) of mathematical objects. They consist of a base clause specifying the basic elements of the set, one or more inductive clauses specifying how additional elements are generated from existing elements, and a final clause stipulating that all of the elements in the set are either basic or in the set because of one or more applications of the inductive clause or clauses (Barwise and Etchemendy 2000, 567). For example, the set of natural numbers (N) can be inductively defined as follows: 1. 0 is an element in N. 2. For any element x, if x is an element in N, then (x + 1) is an element in N. 3. Nothing else is an element in N unless it satisfies condition (1) or (2). Thus, in this example, (1) is the base clause, (2) is the inductive clause, and (3) is the final clause. Now inductive definitions are helpful because, as mentioned before, mathematical inductions are infallible precisely because they rest on inductive definitions. Consider the following mathematical induction that proves the sum of the numbers between 0 and a natural number n (Sn) is such that Sn = ½n(n + 1), a result famously associated with the mathematician Carl Friedrich Gauss (1777-1855): First, we know that S0 = ½(0)(0 + 1) = 0.
Now assume Sm = ½m(m + 1) for some natural number m. Then if Sm+1 represents Sm + (m + 1), it follows that Sm+1 = ½m(m + 1) + (m + 1). Furthermore, since ½m(m + 1) + (m + 1) = ½m² + 1.5m + 1, it follows that ½m² + 1.5m + 1 = (½m + ½)(m + 2). But then, (½m + ½)(m + 2) = ½(m + 1)(m + 2) = ½(m + 1)((m + 1) + 1), so Sm+1 = ½(m + 1)((m + 1) + 1). Since the first subproof shows that 0 is in the set that satisfies Sn = ½n(n + 1), and the second subproof shows that for any number that satisfies Sn = ½n(n + 1), the natural number that is consecutive to it satisfies Sn = ½n(n + 1), then by the inductive definition of N, N has the same elements as the set that satisfies Sn = ½n(n + 1). Thus, Sn = ½n(n + 1) holds for all natural numbers. Notice that the above mathematical induction is infallible because it rests on the inductive definition of N. However, unlike mathematical inductions, enumerative inductions are not infallible because they do not rest on inductive definitions. Induction contrasts with two other important forms of reasoning: Deduction and abduction. Deduction is a form of reasoning whereby the premises of the argument guarantee the conclusion. Or, more precisely, in a deductive argument, if the premises are true, then the conclusion is true. There are several forms of deduction, but the most basic one is modus ponens, which has the following form: If A, then B. A. Therefore, B. Deductions are unique because they guarantee the truth of their conclusions if the premises are true. Consider the following example of a deductive argument: Either Tim runs track or he plays tennis. Tim does not play tennis. Therefore, Tim runs track. There is no way that the conclusion of this argument can be false if its premises are true. Now consider the following inductive argument: Every raven that has ever been observed has been black. Therefore, all ravens are black. This argument is deductively invalid because its premises can be true while its conclusion is false. For instance, some ravens could be brown although no one has seen them yet. Thus a feature of inductive arguments is that they are deductively invalid. Abduction is a form of reasoning whereby an antecedent is inferred from its consequent. The form of abduction is below: If A, then B. B. Therefore, A. Notice that abduction is deductively invalid as well because the truth of the premises in an abductive argument does not guarantee the truth of its conclusion. For example, even if all dogs have legs, seeing legs does not imply that they belong to a dog. Abduction is also distinct from induction, although both forms of reasoning are used amply in everyday as well as scientific reasoning. While neither form of reasoning guarantees the truth of its conclusions, scientists since Isaac Newton (1643-1727) have believed that induction is a stronger form of reasoning than abduction. The problem of induction David Hume questioned whether induction was a strong form of reasoning in his classic text, A Treatise of Human Nature. In this text, Hume argues that induction is an unjustified form of reasoning for the following reason. One believes inductions are good because nature is uniform in some deep respect. For instance, one induces that all ravens are black from a small sample of black ravens because he believes that there is a regularity of blackness among ravens, which is a particular uniformity in nature. However, why suppose there is a regularity of blackness among ravens? What justifies this assumption? Hume claims that one knows that nature is uniform either deductively or inductively.
However, one admittedly cannot deduce this assumption, and an attempt to induce the assumption only makes a justification of induction circular. Thus, induction is an unjustifiable form of reasoning. This is Hume's problem of induction. Instead of becoming a skeptic about induction, Hume sought to explain how people make inductions, and considered this explanation to be as good a justification of induction as could be made. Hume claimed that one makes inductions because of habits. In other words, habit explains why one induces that all ravens are black from seeing nothing but black ravens beforehand. The new riddle of induction Nelson Goodman (1955) questioned Hume’s solution to the problem of induction in his classic text Fact, Fiction, and Forecast. Although Goodman thought Hume was an extraordinary philosopher, he believed that Hume made one crucial mistake in identifying habit as what explains induction. The mistake is that people readily develop habits to make some inductions but not others, even though they are exposed to the same observations. Goodman develops the following grue example to demonstrate his point: Suppose that all observed emeralds have been green. Then we would readily induce that the next observed emerald would be green. But why green? Suppose "grue" is a term that applies to all observed green things or unobserved blue things. Then all observed emeralds have been grue as well. Yet none of us would induce that the next observed emerald would be blue even though there would be equivalent evidence for this induction. Goodman anticipates the objection that since "grue" is defined in terms of green and blue, green and blue are prior and more fundamental categories than grue. However, Goodman responds by pointing out that the latter is an illusion because green and blue can be defined in terms of grue and another term "bleen," where something is bleen just in case it is observed and blue or unobserved and green. Then "green" can be defined as something observed and grue or unobserved and bleen, while "blue" can be defined as something observed and bleen or unobserved and grue. Thus the new riddle of induction is not about what justifies induction but about why people make the inductions they do, given that they have equal evidence for several incompatible inductions. Goodman’s solution to the new riddle of induction is that people make inductions that involve familiar terms like "green," instead of ones that involve unfamiliar terms like "grue," because familiar terms are more entrenched than unfamiliar terms, which just means that familiar terms have been used in more inductions in the past. Thus statements that incorporate entrenched terms are “projectible” and appropriate for use in inductive arguments. Notice that Goodman’s solution is somewhat unsatisfying. While he is correct that some terms are more entrenched than others, he provides no explanation for why unbalanced entrenchment exists. In order to finish Goodman’s project, the philosopher Willard Van Orman Quine (1908-2000) theorizes that entrenched terms correspond to natural kinds. Quine (1969) demonstrates his point with the help of a familiar puzzle from the philosopher Carl Hempel (1905-1997), known as "the ravens paradox": Suppose that observing several black ravens is evidence for the induction that all ravens are black.
Then since the contrapositive of "All ravens are black" is "All non-black things are non-ravens," observing non-black things such as green leaves, brown basketballs, and white baseballs is also evidence for the induction that all ravens are black. But how can this be? Quine (1969) argues that observing non-black things is not evidence for the induction that all ravens are black because non-black things do not form a natural kind and projectible terms only refer to natural kinds (e.g. "ravens" refers to ravens). Thus terms are projectible (and become entrenched) because they refer to natural kinds. Even though this extended solution to the new riddle of induction sounds plausible, several of the terms that we use in natural language do not correspond to natural kinds, yet we still use them in inductions. A typical example from the philosophy of language is the term "game," famously used by Ludwig Wittgenstein (1889-1951) to demonstrate what he called "family resemblances." Look at how competent English speakers use the term "game." Examples of games are Monopoly, card games, the Olympic games, war games, tic-tac-toe, and so forth. Now, what do all of these games have in common? Wittgenstein would say, "nothing," or if there is something they all have in common, that feature is not what makes them games. So games resemble each other although they do not form a kind. Of course, even though games are not natural kinds, people make inductions with the term "game." For example, since most Olympic games have been held in industrialized cities in the recent past, most Olympic games in the near future should occur in industrialized cities. Given the difficulty of solving the new riddle of induction, many philosophers have teamed up with mathematicians to investigate mathematical methods for handling induction. A prime method for handling induction mathematically is statistical inference, which is based on probabilistic reasoning. Instead of asking whether all ravens are black because all observed ravens have been black, statisticians ask what the probability is that all ravens are black, given that an appropriate sample of ravens has been black. Here is an example of statistical reasoning: Suppose that the average stem length in a sample of 13 soybean plants is 21.3 cm with a standard deviation of 1.22 cm. Then the probability that the interval (20.6, 22.1) contains the average stem length for all soybean plants is .95 according to Student's t distribution (Samuels and Witmer 2003, 189). Despite the appeal of statistical inference, since it rests on probabilistic reasoning, it is only as valid as probability theory is at handling inductive reasoning. Bayesianism is the most influential interpretation of probability theory and is an equally influential framework for handling induction. Given new evidence, "Bayes' theorem" is used to evaluate how much the strength of a belief in a hypothesis should change. There is debate about what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct, and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses.
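The two pieces of probabilistic machinery just mentioned can be made concrete with a short numerical sketch. The t-interval uses the soybean summary statistics quoted above; the Bayes' theorem update uses invented numbers purely for illustration, and none of this code comes from the article itself.

```python
# Numerical sketches of the probabilistic tools discussed above.
import math
from scipy import stats

# 95% Student's t interval for the soybean example (n = 13, mean 21.3 cm, sd 1.22 cm).
n, mean, sd = 13, 21.3, 1.22
t_crit = stats.t.ppf(0.975, df=n - 1)            # two-sided 95% critical value
margin = t_crit * sd / math.sqrt(n)
print(f"95% interval: ({mean - margin:.1f}, {mean + margin:.1f}) cm")

# A minimal Bayes' theorem update with made-up numbers.
prior = 0.5              # prior degree of belief in hypothesis H
p_e_given_h = 0.9        # probability of the evidence if H is true
p_e_given_not_h = 0.3    # probability of the evidence if H is false
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(f"P(H | E) = {posterior:.2f}")             # 0.75: the evidence strengthens belief in H
```

The computed interval essentially reproduces the one quoted from Samuels and Witmer, with any small difference due to rounding of the summary statistics.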
The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms. Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or "prior probabilities"; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and "transformation groups" are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions. "Cox's theorem," which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic. Nevertheless, how well probabilistic inference handles Hume's original problem of induction as well as Goodman's new riddle of induction is still a matter debated in contemporary philosophy and presumably will be for years to come.
- Barwise, Jon and John Etchemendy. 2000. Language, Proof and Logic. Stanford: CSLI Publications.
- Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
- Hume, David. 2002. A Treatise of Human Nature (David F. and Mary J. Norton, eds.). Oxford: Oxford University Press.
- Quine, W.V.O. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
- Samuels, Myra and Jeffery A. Witmer. 2003. Statistics for the Life Sciences. Upper Saddle River: Pearson Education.
- Wittgenstein, Ludwig. 2001. Philosophical Investigations (G.E.M. Anscombe, trans.). Oxford: Blackwell.
- Inductive Logic, Stanford Encyclopedia of Philosophy. Retrieved February 7, 2008.
- Deductive and Inductive Arguments, The Internet Encyclopedia of Philosophy. Retrieved February 7, 2008.
http://www.newworldencyclopedia.org/entry/Induction_(philosophy)
General Chemistry w/Lab I Experiment: Density of a Liquid Mixture In this laboratory you will determine the density of liquid mixtures and use graphical techniques to determine the composition of an unknown mixture. You will learn to use MS Excel to graph data and determine the slope, and understand the difference between accuracy and precision as it applies to experimental data. When two liquids are mixed, how does the density of the solution compare to the density of the pure liquids? How does this density change as the percentage of each liquid changes? In this laboratory you will investigate these questions by determining how the density of an isopropanol/water mixture varies as the fraction of isopropanol increases. You will begin by using the known density of pure water to determine the volume of a pycnometer, and then determine the densities of pure isopropanol and several isopropanol/water solutions. The results will be graphed, and the relationship between the density of the solution and percent isopropanol will be determined. This graph can then be used to find the percent composition of an unknown mixture. For this experiment you are to work in pairs. The first task will be to determine the volume of the pycnometer. This is accomplished by weighing the pycnometer empty, filling it with deionized water and reweighing. From this data, the mass of water can be calculated. Then, using the known density of water (available in published tables) the volume of the water, and thus the volume of the flask, can be determined. The density of all remaining solutions are to be determined in a similar way. Once the volume of the pycnometer is accurately known, it can be filled with the desired solutions and weighed. Using the mass and volume of the unknown solutions, each density can be calculated. Each pair of students will be assigned a different isopropanol/water mixture. You will need to prepare this solution by mixing the appropriate amounts of isopropanol and water. You will also be given a solution with an unknown percentage of isopropanol, and are to determine its density, and from that, its composition. Lastly, every pair of students will determine the density of pure isopropanol. A graph of density vs percent isopropanol will be prepared from the class data and linear regression will be used to determine the slope and intercept. This can then be used to determine the composition of the unknown solution. Finally, we will judge the accuracy and precision of our methods by pooling the class results for the density of pure isopropanol. - Obtain a pycnometer. If it is clean and dry, proceed to step 2. If not, wash it carefully with soap and water and rinse it with deionized water. To dry, squirt a small amount of acetone inside, swirl to coat the sides, and discard the liquid in the waste container in the hood. Blowing air into the flask with an empty, dry plastic squeeze bottle will speed the drying process. - Weigh the clean, dry pycnometer with the cork inserted, and record the result. - Obtain a beaker of deionized water and record the temperature. Look up the density of water at this temperature and record the density in your data table in your notebook. - Fill the pycnometer with the deionized water. This requires a little care. First fill to just above the cap stem, tap gently to dislodge bubbles, and insert the cap. Water should squeeze out the top and no air bubbles should remain. - Dry the outside of the flask, weigh, and record the result. 
Calculate the volume of the pycnometer. Repeat the procedure a second time to ensure that you are achieving consistent results.
- Record the solution you have been assigned to prepare. Figure out the amount of water and isopropanol to mix in order to prepare 60.0 mL of this solution. Mix the liquids in a graduated cylinder by stirring, and cap with a rubber stopper when you are done to minimize evaporation.
- Fill the dry pycnometer completely with this solution and weigh. Use the known volume of the flask to calculate the density of your solution. Repeat the procedure a second time and record both results on the whiteboard.
- Rinse the pycnometer twice with small amounts of your assigned unknown, then fill it with the mixture. Weigh the flask and contents, record the result, and calculate the density of the mixture.
- Rinse the pycnometer twice with small amounts of pure isopropanol, then fill with pure isopropanol. Weigh the flask and contents, and record the result. Calculate the density of pure isopropanol, and record your result on the whiteboard.
- Construct a graph (by hand) of density vs percent isopropanol for all class data obtained. Do NOT use the graph paper in your lab notebook; it is not accurate enough (although you may want to use it for a rough sketch). Instead, use the graph paper provided by your instructor. Staple the completed graph into your lab notebook. You must complete a graph by hand and have it approved by your instructor prior to using a computer to generate a graph.
- Determine the slope and intercept of the hand-drawn graph using standard (rise over run) methods.
- Each student is to use MS Excel to build a spreadsheet containing the same data as the hand-drawn graph, being sure to arrange the data in columns, with clear labels and proper units included at the top of the columns. Use the Format Cells command to format the data so that the correct number of significant figures is displayed. While constructing your spreadsheet pay attention to appearance as well as substance. Be sure to include your name in the title at the top of the spreadsheet, and print a copy for inclusion in each of your reports.
- Use the Chart feature to prepare a graph of the data, and use the regression feature to determine the slope of the line. Print a copy of the graph to include in your lab report. Since the graphs will appear very similar it is essential that you place your name on the graph before printing.
- Use your calculated slope and intercept (obtained in MS Excel using linear regression) to determine the composition of your unknown solution.
- Enter the data for the pure isopropanol into another spreadsheet, and format as you did for the previous data set. Then use the formulas in MS Excel to compute the mean (average) value and standard deviation for the density of pure isopropanol. Include all class results in your calculation. Print a copy of this data table for inclusion in your report.
- Use the CRC Handbook to look up the accepted value for the density of isopropanol. Calculate the percent error for both the class average and your result, and comment on the accuracy and the precision of the class result as well as your own.
Writeup and Analysis
Your report in your lab notebook should include:
- Tables of all data collected, with correct units and significant figures.
- Two graphs of density vs percent isopropanol, the first prepared by hand and the other using MS Excel. Be sure to label the axes and title the graphs.
- A section showing all calculations, including the calculation for the slope of your hand-drawn graph. All values should contain the correct units and significant figures.
- Your calculation of the density of your unknown solution, and determination of its composition.
- The class average for the density of pure isopropanol, the accepted value, the percent error and standard deviation of the class results.
- Your analysis/discussion of the accuracy and precision of the class, and your own, results, and discussion of possible sources of error. (A short numerical sketch of these calculations, with made-up data, follows below.)
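The following sketch mirrors the analysis chain described above. All masses, mixture densities, the unknown density, and the class results are invented placeholders, and the accepted isopropanol density is only approximate; it is a worked illustration of the arithmetic, not real lab data.

```python
# Sketch of the lab's analysis chain with made-up numbers (not real data).
import statistics

# 1. Pycnometer volume from the mass of water it holds.
mass_empty, mass_with_water = 25.132, 50.187      # g (hypothetical)
density_water = 0.99705                           # g/mL near 25 C (published value)
volume = (mass_with_water - mass_empty) / density_water
print(f"pycnometer volume = {volume:.3f} mL")

# 2. Hypothetical class data: density of each assigned mixture.
percent_ipa = [0, 20, 40, 60, 80, 100]            # percent isopropanol
densities = [0.997, 0.955, 0.913, 0.871, 0.829, 0.786]   # g/mL (hypothetical)

# 3. Linear regression (least squares) of density vs percent isopropanol.
mean_x = statistics.mean(percent_ipa)
mean_y = statistics.mean(densities)
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(percent_ipa, densities))
slope_den = sum((x - mean_x) ** 2 for x in percent_ipa)
slope = slope_num / slope_den
intercept = mean_y - slope * mean_x
print(f"slope = {slope:.5f} g/mL per percent, intercept = {intercept:.4f} g/mL")

# 4. Composition of the unknown from its measured density.
unknown_density = 0.925                           # g/mL (hypothetical)
percent_unknown = (unknown_density - intercept) / slope
print(f"unknown is about {percent_unknown:.1f}% isopropanol")

# 5. Percent error of the class average for pure isopropanol.
accepted = 0.786      # g/mL, approximate; look up the exact CRC value as instructed
class_avg = statistics.mean([0.789, 0.783, 0.791])   # hypothetical class results
percent_error = abs(class_avg - accepted) / accepted * 100
print(f"percent error = {percent_error:.1f}%")
```

Excel's SLOPE, INTERCEPT, AVERAGE, and STDEV functions perform the same calculations; the sketch simply shows the arithmetic behind them.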
http://www.instruction.greenriver.edu/knutsen/chem140/densmix.html
All aircraft are built with the same basic elements: wings to provide lift, engine(s) to provide motive power, a fuselage to carry the payload and controls, and a tail assembly which usually controls the direction of flight. These elements differ in shape, size, number, and position. The differences distinguish one aircraft type from another.
Angle of Attack (AOA) -- The angle between the wing and the relative wind. When all else is held constant, an increase in AOA results in an increase in lift. This increase continues until the stall AOA is reached; then the trend reverses itself, and an increase in AOA results in decreased lift.
Ailerons -- Located on the outer part of the wing, the ailerons help the airplane turn. Ailerons are control surfaces which are used to change the bank of the airplane, or roll the airplane. As the ailerons hinge down on one wing, they push the air downwards, making that wing tilt up. This tips the airplane to the side and helps it turn. This tipping is known as banking. They are manipulated from the cockpit by moving the control column (stick) left and right. Right movement rolls the airplane to the right and vice versa. Roll speed is proportional to the amount of stick deflection. Once a desired bank is attained, the stick is centered to maintain the bank.
Airfoil Section -- The cross-sectional shape of the wing. The airfoil section shape and placement on the fuselage are directly linked to the airplane's performance. (See Angle of Attack.)
Bank -- The angle between the wings and the horizon, as viewed from the rear of the airplane. An airplane with its wings level has zero degrees of bank.
Banking -- Pushing the control stick in the cockpit to the left or right makes the ailerons on one wing go down and the ailerons on the other wing go up. This makes the plane tip to the left or right. This is called banking. Banking makes the plane turn. Like a bicycle, the plane tilts, or banks, as it turns. This process is also called roll.
Cockpit -- Where the pilot sits. All of the controls and instruments are located here.
Control Stick -- The ailerons are connected to the control stick, which is located in the cockpit. Pushing the stick to the left or to the right makes the ailerons on one wing go down and the ailerons on the other wing go up. This makes the plane tip to the left or right. This is called banking. This tipping is also called roll.
Drag -- One of the four basic principles of flight. Drag is the force encountered as an airplane pushes through the air, which tends to slow the airplane down. There are two types of drag, and an airplane must fight its way through both kinds of drag in order to maintain steady flight.
Elevators -- The elevators are movable flaps attached to the horizontal stabilizer, used to change the angle of attack (AOA) of the wing, which will, in turn, change the pitch, moving the airplane up and down. They are operated by moving the control stick forward or backward, which in turn moves the elevator down or up, respectively. When the pilot "moves the stick forward to make the trees bigger and back to make them smaller," it is the elevator that does the work.
Engine -- This part of the plane produces thrust, or forward movement, necessary to sustain flight. Thrust is one of the four basic principles behind plane flight. The engine turns the propeller.
Flaps -- Located on the inner part of the wing, the flaps help the plane fly slower. This helps to increase the lifting force of the wing at slower speeds, like during takeoff and landing.
These slower speeds make takeoff and landing distances shorter. The Flaps slide back and forth, and are controlled by a lever in the cockpit. Flaps are moved down from a streamlined position to increase the amount of lift produced at a particular airspeed. - Profile or parasite drag is the same kind of drag experienced from all objects in a flow. Cars, rocks, and hockey pucks must all overcome profile drag. This type of drag is caused by the airplane pushing the air out of the way as it moves forward. This drag can easily be experienced by putting your hand out the window of a moving vehicle (experienced en masse if your hand encounters something more dense than air). - The other type, called "induced drag," is the result of the production of lift (you can't get something for nothing!). This drag is the part of the force produced by the wing that is parallel to the relative wind. Objects that create lift must also overcome this induced drag, also known as drag-due-to-lift. Skin friction is a function of the surface area wetted by the airstream. Any increase in surface area will increase skin friction drag. The other component of profile drag is pressure drag. Pressure drag is a function of the size of the wake behind an object in an airstream; it can be reduced by streamlining the object in order to delay separation of the flow. A side effect of streamlining is an increase in the wetted (exposed) area and hence the skin friction, so it is important to ensure that a net reduction in drag is actually achieved when adding streamlining. Fuselage -- The Fuselage is the central "body" of the plane. The wings, tail and engines are all attached to it. In a modern passenger airplane, you sit only in the top half of the Fuselage. The Fuselage also houses the cockpit where all the controls necessary for operating and controlling the plane are located. Cargo is also housed in the bottom half of the Fuselage. The Fuselage is generally streamlined as much as possible. Horizontal Stabilizer -- The horizontal stabilizer is a fixed position airfoil that stabilizes the pitch of the airplane. When a wing produces lift, it also develops a force that tries to pitch the airplane forward. The horizontal stabilizer prevents this unwanted pitch from occurring. Gravity -- Gravity is the attractive force from the earth that acts upon all mass. It is one of the four principles of flight. Landing Gear -- On conventional aircraft, the Landing Gear consists of wheels or tires with supports (struts) and shock absorbers which help in takeoff and landing. To reduce drag while the plane is flying, most wheels fold up into the body of the plane after takeoff. On many smaller aircraft, the wheels do not fold up after takeoff. Lift -- An upward force that causes an object to rise. In aircraft it may be produced by downward-facing propellers, or by a moving wing with an airfoil shape (the specially curved shape of an airplane wing). Lift is one of the four basic principles of flight. Forces are produced by the wing as the air flows around it. Lift is the part that is perpendicular to the relative wind. The other part contributes to drag. Pitch -- The angle between the airplane's body (lengthwise) and the ground. An airplane going straight up would have a pitch attitude of ninety degrees and one in level flight, about zero degrees. Relative Wind -- The direction that the air is going as it passes the airplane relative to the airplane. 
Relative wind has nothing to do with the wind speed on the ground.
Propeller -- This part of the plane produces thrust or forward movement necessary to sustain flight. This turning blade on the front of an airplane moves it through the air.
Roll -- Roll is the tilting motion the airplane makes when it turns.
Rudder -- The rudder, controlled by the rudder pedals, is the hinged part on the back of the tail which helps to turn the aircraft. It is the vertical part of the tail which controls the sideways movement of the airplane, called the yaw. The least used of all controls, most flying can be safely accomplished without it. (One exception is landing with a crosswind; yaw induced by the rudder must be used to keep the fuselage aligned with the runway and prevent an excursion into the grass.)
Stall -- What a wing does when a given angle of attack is exceeded (the stall angle of attack). The stall is characterized by a progressive loss of lift for an increase in angle of attack.
Tail -- The tail has many movable parts. The pilot controls these parts from the cockpit. Included in the parts on the tail are the rudder and the elevators.
Thrust -- The force produced by the engines; thrust works opposite to and counteracts drag. Thrust is the forward movement that is necessary to sustain flight. It is one of the four basic principles of flight.
Trim -- When the controls are moved from neutral, it takes a certain amount of pressure to hold them in position in the airflow. Trim gets rid of this pressure and effectively changes the "center" of the controls - or the neutral position where there is no stick pressure.
Vertical Stabilizer -- The vertical stabilizer is the yaw stabilizer for the airplane; it keeps the nose of the airplane (as seen from above) pointed into the relative wind.
Weight -- The force produced by the mass of the airplane interacting with the earth's gravitational field; the force that must be counteracted by lift in order to maintain flight.
Wing -- The wings are the "arms" of the airplane. They provide the principal lifting force of the airplane. They hold the plane aloft by creating lift from the air rushing over them. Like all plane parts, the wings should be light and strong, but also flexible to absorb sudden gusts of wind.
Yaw -- The angle between the fuselage of the airplane and the relative wind as seen from above the airplane. Yaw is the term pilots use to describe the turning left or right of the plane. Yaw is the sideways movement of the plane. Normally an airplane is flown without yaw.
- Basic Weight - The weight of the basic aircraft plus guns, unusable fuel, oil, ballast, survival kits, oxygen, and any other internal or external equipment that is on board the aircraft and will not be disposed of during flight.
- Operating Weight - The sum of basic weight and items such as crew, crew baggage, steward equipment, pylons and racks, emergency equipment, special mission fixed equipment, and all other nonexpendable items not in basic weight.
- Gross Weight - The total weight of an aircraft, including its contents and externally mounted items, at any time.
- Landing Gross Weight - The weight of the aircraft, its contents, and external items when the aircraft lands.
- Zero Fuel Weight (ZFW) - The weight of the aircraft without any usable fuel. (This limit exists because of structural limitations of the aircraft.)
Wings
Lift is the aerodynamic force that supports an aircraft in flight, due to the airflow over the wings or body.
Drag is the resistance a vehicle moving through the air experiences, and pitching moments are a result of aerodynamic forces that make the nose of an aircraft move either up or down. The shape of a wing looks like an elongated water drop lying on its side. This shape is referred to as an airfoil. Usually the top is curved more than the bottom, making the upper surface slightly longer than the bottom. The air passing over the curved top travels faster than the air passing underneath (although not, as is sometimes claimed, because the two streams must reach the rear of the wing at the same time), and it also changes direction and is deflected downward. This results in lift being generated due to a rate of change of vertical momentum and a difference in static pressure between the top and bottom of the wing. The production of lift is probably the most important topic in the science of aerodynamics. It is a wing's ability to efficiently produce a force perpendicular to the air passing over it that makes heavier-than-air flight possible. In the big picture, all wings produce lift the same way - they push down on the air, forcing the air downward relative to the wing. It is this force that we call lift. Many different types of shapes do this, but the shapes built specifically for this purpose are called "airfoils." The wing makes its "magic" by forcing the air down. Some people like to compare it to water skiing, where water skis and speed are used to force the water down and the skier up. But that analogy tells only part of the story. Most of the time, the top of the wing does the majority of the "pushing" on the air (actually, in this case, "pulling" the air down). The top and the bottom of the wing combine to produce a force, and the part of this force perpendicular to the relative wind is lift. Since the wing not only pushes the air down but slows it down as well, some drag (induced drag) is caused. The chord line is an imaginary line drawn from the leading edge to the trailing edge of an airfoil. The relative wind is the airflow which acts on the airfoil and is parallel to, but opposite, the direction of flight. The angle between the chord line and the relative wind is called the angle of attack, denoted "alpha." As the angle of attack increases, the change of vertical momentum increases. Additionally, as the angle of attack increases, the coefficient of lift (CL) increases. The result is an increase in lift. However, there are limits to how much the angle of attack can be increased. At some higher angle of attack, the lift coefficient begins to decrease. The angle of attack where the lift coefficient begins to decrease is called the critical angle of attack. Once the critical angle is exceeded, the wing can no longer produce enough lift to support the weight of the aircraft and the wing is said to be "stalled." In other words, the aircraft will stall when the critical angle of attack is exceeded.
Lift and Drag
A wing must be at a high enough AOA to deflect the air downward and produce the desired lift. The pilot uses the elevators to change the angle of attack until the wings produce the lift necessary for the desired maneuver. Other factors are involved in the production of lift besides the AOA. These factors are relative wind velocity (airspeed) and air density (temperature and altitude). Changing the size or shape of the wing (lowering the flaps) will also change the production of lift. Airspeed is absolutely necessary to produce lift. If there is no airflow past the wing, no air can be diverted downward.
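The dependence of lift on angle of attack, airspeed, air density, and wing size described here is conventionally summarized by the standard lift equation. The equation is not written out in the original text; the symbols below follow the usual textbook convention.

```latex
% Standard lift equation (conventional notation, not quoted from this document):
%   L   = lift force
%   \rho = air density
%   V   = airspeed (relative wind velocity)
%   S   = wing area
%   C_L = lift coefficient, which grows with angle of attack up to the stall
\[
  L = \tfrac{1}{2}\,\rho\,V^{2}\,S\,C_{L}
\]
```

Because lift varies with the square of airspeed, halving the speed cuts the V² term by a factor of four, so the lift coefficient (and hence the angle of attack) must rise sharply to keep lift equal to weight -- exactly the behavior described in the next paragraph.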
At low airspeed, the wing must fly at a high AOA to divert enough air downward to produce adequate lift. As airspeed increases, the wing can fly at lower AOAs to produce the needed lift. This is why airplanes flying relatively slowly must be nose high (like an airliner just before landing or just as it takes off) but at high airspeeds fly with the fuselage fairly level. The key is that the wings don't have to divert fast moving air down nearly as much as they do slow moving air. As an airplane in flight slows down, it must continually increase its pitch attitude and AOA to produce the lift necessary to sustain level flight. At high AOAs, the top of the wing diverts the air through a much larger angle than at low AOAs. As the AOA increases, a point will be reached where the air simply cannot "take" the upper curve over the entire distance of the top of the wing, and it starts to separate. When this point is reached, the wing is not far from stalling. The airflow unsticks further up the wing as the AOA increases. The top of the wing still contributes to the production of lift, but not along its entire curve. As the airspeed slows further, or the angle of attack is increased further, or both, the point is reached where, because of this separation, an increase in the AOA results in a loss of lift instead of an increase in lift. Thus, the wing no longer produces sufficient lift and the airplane that the wing is supporting accelerates downward. This is the stall. Air density also contributes to the wing's ability to produce lift. This is manifested primarily in an increase in altitude, which decreases air density. As the density decreases, the wing must push a greater volume of air downward by flying faster or push it down harder by increasing the angle of attack. This is why aircraft that fly very high must either go very fast like the SR-71, capable of flying Mach 3 (three times the speed of sound), or must have a very large wing for its weight, like the U-2.
Wing Approaching the Stall
EFFECTS OF CONTROL MOVEMENTS
Knowing what happens when the controls are operated is the most basic skill of piloting. It is also among the most misunderstood. When an airplane is flying, it has a good deal of forward speed and airflow over all of its surfaces. Control movements must be understood in terms of this airflow and its effects. The elevator controls the angle of attack (AOA) of the wings, and subsequently the pitch. Pulling back on the stick results in a down force on the tail (the same thing is operating here that was operating on the wings, only in a different direction). If the controls are reversed, the opposite happens.
Effects of Back Stick Movement
Backward stick movement forces the tail down and the nose up. This rotation occurs around the center of gravity of the airplane. Initially the airplane, even though its nose is up, is still headed in the same direction - the only thing that has changed is the angle of attack. But an increase in the angle of attack results in an increase in lift, so now the airplane starts to go up. Then, like an arrow, it points into the wind, increasing its pitch. This process continues, viewed from the cockpit as an increase in pitch, until the pilot moves the stick forward to a neutral position and stabilizes the pitch. The temptation to think that the stick directly raises or lowers the nose is very strong, and most of the time, roughly correct. But if the stick is moved back when the airplane is very close to the stall, the aircraft will not pitch up much, if at all.
This back stick movement and increase in AOA will stall the wing, causing a loss of lift and acceleration downward: now the pitch moves opposite the stick movement. The ailerons are a much simpler control than the elevator. Located near the wing tips on the trailing edge of the wing, they are used in unison to change the amount of lift each wing is producing and roll the airplane. When the pilot moves the stick side-to-side from center, the ailerons move in opposite directions. In a roll to the right (as viewed from the cockpit), the right aileron goes up and the left aileron goes down. Each aileron serves to change how that part of the wing deflects the air and thus increases or decreases the amount of lift produced by each wing. The down aileron forces the air down harder, resulting in an increase in lift, and the up aileron decreases the downward force, resulting in a decrease in lift. In the case of a right roll, the decreased lift on the right side and increased lift on the left side result in a roll to the right. Operating the ailerons causes an effect called adverse yaw. Adverse yaw is the result of an increase in drag on the wing with the down aileron, or "upgoing" wing. This wing, since it is forcing the air down harder than the "downgoing" wing and producing more lift, also produces more drag. The drag pulls the wing back and causes yaw. If this yaw is not corrected with rudder, the roll is said to be "uncoordinated."
The Rudder
The rudder is controlled by the "rudder pedals" located on the floor of the aircraft. They are both connected to the rudder so that when one or the other pedal is depressed, it moves the rudder in the desired direction. The rudder, connected to the vertical stabilizer, then starts to deflect air much like a wing, only the resulting force is to the side. This force causes a change in yaw. As mentioned earlier, the rudder is not used very often, but when it is needed (e.g., in a crosswind), its presence is appreciated.
Engines
An engine pushes air toward the rear of the aircraft, and the reaction to this force "thrusts" the aircraft forward. For this reason, the force produced by the engine is called thrust. Thrust is the most important force acting on an aircraft, because regardless of the type of aircraft, ALL need some type of thrust to propel them aloft. Even unpowered aircraft such as gliders need a tow plane to provide an external force to pull the aircraft into the air, where it can obtain airflow over the wings to provide the necessary lift to remain airborne. Hang gliders use foot power to initiate movement prior to "leaping" off a cliff. The most common means of developing thrust on powered airplanes comes from propellers or jets. Whether an aircraft has a propeller, a turbojet, or a turbofan, all of these produce thrust by accelerating a mass of air to the rear of the aircraft. The movement of this air to the rear creates an unbalanced force pushing the aircraft forward. The Wright brothers made many important things come together for their historic first heavier-than-air flight. One of the most vital was an engine that efficiently produced thrust while not weighing too much. They used propellers - the only effective means available of transferring an internal combustion engine's output into push or pull for the airplane. Propellers are essentially revolving wings situated so that the lift they produce is used to pull or push the airplane. Most modern high-speed aircraft use a very different type of engine - the jet engine.
Jet engines not only look different from propellers, they operate in a very different manner as well. More like rocket engines, jets produce thrust by burning propellant (jet fuel mixed with air) and forcing the rapidly expanding gases rearward. In order to operate from zero airspeed on up, jets use enclosed fans on a rotating shaft to compress the incoming air (and suck it in if the airplane is not going very fast) and send it into the combustion chamber where the fuel is added and ignited. The burning gases keep the shaft turning by rotating a fan before exiting the engine. Some other jet engines differ from this basic pattern by the way they compress the incoming air. Instead of forcing it down a restricting tube, the T-37 "Tweet's" centrifugal flow compressor literally flings the air outward into the compressor section exit, compressing it against the outside wall. In a turbojet engine, the inlet area is small when compared to that of a propeller. As the air exits the compressor section of the engine, it enters the combustion chamber where fuel is added. This densely packed air/fuel mixture is ignited and the resultant "explosion" accelerates the gases out the rear of the engine at a very high rate of speed. This chemical acceleration of the air (combustion) adds to the thrust produced by the engine. Most jet fighters have a system called afterburners, which adds raw fuel into the hot jet exhaust, generating even more thrust through higher accelerations of the air. The jet generates large amounts of thrust by chemically accelerating the air as the result of combustion. The fact that the jet compresses the air as much as 40 times (depending upon the number of compressor rings) allows the jet aircraft to fly at higher altitudes where the air is too thin for propeller-driven airplanes.
Centrifugal Flow Jet Engine (T-37)
The engine thrust is controlled by a throttle - one for each engine. As the throttle is moved forward, more fuel is added and the engine rotates faster and produces more thrust. Thrust is also directly related to engine revolutions per minute (RPM); the amount of thrust is often referred to as percentage RPM. There is a price to pay for the ability to fly at higher speeds and altitudes. That price comes in the form of higher fuel consumption, or in more everyday terms, lower fuel mileage. As a propeller blade turns faster, the tips begin to reach supersonic speeds. At these tip speeds, shock waves begin to develop and destroy the effectiveness of the prop. It would seem, therefore, that the most efficient engine would be a combination of the turbojet and a large, slow-turning prop. In recent years, these engines have been developed and are called "high by-pass ratio turbofans." The engines use a turbojet as a "core" to serve two purposes: 1) to produce a portion of the total thrust, and 2) to turn a huge fan attached to the main shaft. The engine can operate at higher altitudes because the jet core can compress the thin air. The thrust produced by the core is supplemented by having a VERY large fan section attached to the main shaft of the core. The fan draws in huge amounts of air and therefore can turn slowly enough to prevent the flow at the blade tips from becoming supersonic. Since the fan is mounted to the same shaft as the core, the by-pass ratio of these engines is determined by dividing the amount of air flowing through the fan blades by the amount of air passing through the engine core.
The overall result is: 1) the fan mechanically imparts a small acceleration to a large mass of air, and 2) the jet core compresses thin air and chemically generates large accelerations of a relatively small mass of air. The wings are not the only "lifting surfaces" on an airplane. The horizontal and vertical stabilizers are lifting surfaces as well and use aerodynamic lift for the purpose of changing aircraft attitude and maintaining stable flight. Some aircraft also use the fuselage to produce lift (the F-16 is a good example). An understanding of, or at least an "intuitive feel" for, the production of lift is essential for safe piloting. Many would-be pilots have been killed because, when encountering an unexpected stall fairly close to the ground, they did not act to get the wing flying again (stick forward to decrease the angle of attack below the stall angle of attack) before attempting to pull away from the ground.
Aircraft Performance
Performance generally refers to the motion of the airplane along its flight path, fore and aft, up or down, right or left. The term "performance" also refers to how fast, how slow, how high and how far. It may also refer, in a general sense, to the ability of an airplane to successfully accomplish the different aspects of its mission. Included are such items as minimum and maximum speed, maximum altitude, maximum rate of climb, maximum range and speed for maximum range, rate of fuel consumption, takeoff and landing distance, weight of potential payload, etc. There are specific maneuvers which are used to measure and quantify these characteristics for each airplane. In many cases, flight testing takes place in a competitive environment to select the best airplane for accomplishing a particular mission. Since all of these performance measurements are strongly affected by differences in the weather conditions (that is, temperature, pressure, humidity, winds), there are some very specific and complex mathematical processes which are used to "standardize" these values. One of the most important considerations in flight is the balance of forces maintained between thrust, drag, lift, and weight.
Balance of Forces
An aircraft in flight retains energy in two forms: kinetic energy and potential energy. Kinetic energy is related to the speed of the airplane, while potential energy is related to the altitude above the ground. The two types of energy can be exchanged with one another. For example, when a ball is thrown vertically into the air, it exchanges its kinetic energy (the velocity imparted by the thrower) for potential energy as the ball reaches zero speed at peak altitude. When an airplane is in stabilized, level flight at a constant speed, the power has been adjusted by the pilot so that the thrust is exactly equal to the drag. If the pilot advances the throttle to obtain full power from the engine, the thrust will exceed the drag and the airplane will begin to accelerate. The difference between the thrust required for level flight and the maximum available from the engine is referred to as "excess thrust". When the airplane finally reaches a speed where the maximum thrust from the engine just balances the drag, the "excess thrust" will be zero, and the airplane will stabilize at its maximum speed. Notice that this "excess thrust" can be used either to accelerate the airplane to a higher speed (increase the kinetic energy) or to enter a climb at a constant speed (increase the potential energy), or some combination of the two.
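The energy-exchange relation alluded to in the next paragraph is usually written as a specific-excess-power equation. The form below is the standard textbook version, included here as a sketch rather than a formula quoted from this document.

```latex
% Specific excess power P_s (standard form; T = thrust, D = drag, W = weight,
% V = true airspeed, g = gravitational acceleration, h = altitude):
\[
  P_s \;=\; \frac{(T - D)\,V}{W}
      \;=\; \frac{dh}{dt} \;+\; \frac{V}{g}\,\frac{dV}{dt}
\]
% With full throttle, T - D is the "excess thrust"; P_s can be spent as rate of
% climb (dh/dt), as acceleration (dV/dt), or as any combination of the two.
```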
Excess Thrust
Energy Exchange
There are energy exchange equations which can be used to relate the rate of change of speed (or acceleration) to the rate of change of altitude (or rate of climb). (These equations are introduced later.) In this way, level flight accelerations (accels.) at maximum power can be used to measure the "excess thrust" over the entire speed range of the airplane at one altitude. This "excess thrust" can then be used to calculate the maximum rate of climb capability for an aircraft.
Takeoff
The takeoff is a critical maneuver in any airplane. The airplane will usually be carrying a payload (passengers, cargo, weapons) and often a full load of fuel. The resulting heavy weight means that a high speed must be reached before the wings can generate sufficient lift, thus a long distance must be travelled on the runway before lift-off. After lift-off, the heavy weight will result in a relatively slow acceleration to the speed for best angle of climb. After lining the aircraft up on the runway, the pilot applies the brakes (accomplished by applying pressure to the top of the rudder pedals - each pedal controls its respective wheel). The throttles are then advanced to military power (100% RPM). As the engines wind up, the engines and instruments are given a "last minute" check. (Pilots do a lot of "checks" to ensure that everything is going OK. After all, if something were to happen, you can't just pull off to the side of the road!) When everything is ready, the brakes are released and the airplane accelerates down the runway. At a pre-determined speed, the pilot pulls back on the stick to pitch the airplane upward about five degrees. Although the nose wheel is off the ground, the main gear remains on the runway because there is not yet enough airflow over the wings to create sufficient lift to raise the aircraft. After a little while, the airplane reaches the speed (90 knots) at which its wings produce lift slightly greater than its weight and it takes off. While the airplane climbs away from the runway the pilot must raise the landing gear (this decreases the drag) and the flaps, then let it accelerate to the desired climb speed. Once this speed is reached, it is maintained by raising the nose slightly and "trimming" off all control stick pressures.
Straight and Level Flight
If an airplane maintains a given altitude, airspeed, and heading, it is said to be in "straight and level flight." This condition is achieved and maintained by equalizing all opposing forces. Lift must equal weight so the airplane does not climb or descend. Thrust must equal drag so the airplane does not speed up or slow down. The wings are kept level so the airplane does not turn. Any imbalance will result in a change in altitude or airspeed. It is the pilot's responsibility to prevent or correct for such an imbalance. Proper trim is essential for maintaining this balance. If the pilot, by being "out of trim," is forced to maintain a given amount of stick pressure, the arm holding the stick will eventually tire. But in the short term the pilot must very precisely hold that pressure -- any change will result in a change in attitude. If the airplane is properly trimmed, the correct stick position is held automatically, and no pressure need be exerted. Obviously, an airplane cannot remain indefinitely in this ideal condition. Due to mission, airspace, and fuel requirements, the pilot must change the airspeed, altitude, and heading from time to time.
Speeding up and slowing down is not simply a matter of changing the throttle setting (changing the force produced by the engines). Airspeed can also be changed by changing the drag. Many aircraft are equipped with a "speedbrake" for this purpose -- a large metal plate that can be extended out into the windstream, increasing parasite drag and slowing the airplane. As an airplane speeds up or slows down, the amount of air passing over the wing follows suit. For instance, to maintain a constant altitude as the airspeed is decreasing, the pilot must compensate for this decreased airflow by changing the AOA (pulling back on the stick) to equalize the amount of lift to the weight of the airplane. All this works nicely until stall speed is reached, when an increase in AOA is met with a decrease in lift, and the airplane, its weight not completely countered by lift, begins to dramatically lose altitude. Conversely, an increase in airspeed must be met with a decrease in the AOA (moving the stick forward) to maintain a constant altitude. As airspeed increases or decreases, trim must be changed as well.
Mach number is the most influential parameter in the determination of range for most jet-powered aircraft. The most efficient cruise conditions occur at a high altitude and at a speed which is just below the start of the transonic drag rise. The drag (and thus the thrust required to maintain constant Mach number) will change as the weight of the airplane changes. The angle of attack (and thus the drag) of an airplane will become slightly lower as fuel is used since the airplane is becoming lighter and less lift is required to hold it up. Climbs and descents are accomplished by using a power setting respectively higher or lower than that required for level flight. When an airplane is in level flight, just reducing the power begins a descent. Instead of pulling back on the stick to maintain altitude as the airspeed slows, the pilot keeps the stick neutral or pushes it forward slightly to establish a descent. Gravity will provide the force lost by the reduction in power. Likewise, increased power results in a climb. Airspeed can be controlled in a climb or descent without changing the throttle setting. By pulling back on the stick and increasing the climb rate or by decreasing the descent rate, the airspeed can be decreased. Likewise, lowering the nose by pushing forward on the stick will effectively increase the airspeed. In most climbs and descents, this is the way airspeed is maintained. A constant throttle setting is used and the pilot changes pitch in small increments to control airspeed. If the pilot were to fly a climb such that the airplane was at the best-climb speed as it passed through each altitude, it would be achieving the best possible rate of climb for the entire climb. This is known as the "best-climb schedule." Flying the best-climb schedule will allow the airplane to reach any desired altitude in the minimum amount of time. This is a very important parameter for an interceptor attempting to engage an incoming enemy aircraft. For an aircraft that is equipped with an afterburner, two best-climb schedules are determined: one for a Maximum Power climb (afterburner operating) and one for a Military Power climb (engine at maximum RPM but afterburner not operating). The Max Power climb will result in the shortest time but will use a lot of fuel and thus will be more useful if the enemy aircraft is quite close. The Mil.
Power climb will take longer but will allow the interceptor to cruise some distance away from home base to make the intercept. For cargo or passenger aircraft the power setting for best climb is usually the maximum continuous power allowed for the engines. By flying the best-climb schedule the airplane will reach its cruise altitude in the most efficient manner, that is, with the largest quantity of fuel remaining for cruise.
Range
One of the most critical characteristics of an airplane is its range capability, that is, the distance that it can fly before running out of fuel. Range is also one of the most difficult features to predict before flight since it is affected by many aspects of the airplane/engine combination. Some of the things that influence range are very subtle, such as poor seals on cooling doors or small pockets of disturbed air around the engine inlets.
The aerodynamics of a turn are widely misunderstood, since many people think that the airplane is "steered" by the stick or the rudder pedals (probably the result of thinking of the airplane as a sort of "flying car"). A turn is actually the result of a change in the direction of the lift vector produced by the wings. A pilot turns an airplane by using the ailerons and coordinated rudder to roll to a desired bank angle. As soon as there is bank, the force produced by the wings (lift) is no longer straight up, opposing the weight. It is now "tilted" from vertical so that part of it is pulling the airplane in the direction of the bank. It is this part of the lift vector that causes the turn. Once the pilot has established the desired bank angle, the rudder and the aileron are neutralized so that the bank remains constant. When part of the lift vector is used for turning the airplane, there is less lift in the vertical direction opposing the weight. If the pilot were to establish a bank angle without increasing the total amount of lift being produced, the lift opposing the weight would decrease, and the resulting imbalance would cause a descent. The pilot compensates by pulling back on the stick (increasing the AOA and therefore lift). By increasing the total lift, the lift opposing the weight can balance out the weight and maintain level flight. This increase in total lift also increases lift in the turn direction and results in a faster turn.
Turn Lift Requirements
As the bank angle increases, the amount of pull required to maintain level flight increases rapidly. It is not possible to maintain level flight beyond a given bank angle because the wings cannot produce enough lift. An attempt to fly beyond this point will result in either a stall or a descent. Physiologically speaking, the most important part of a turn is the necessity to pull "Gs". As the back pressure is increased to maintain level flight, the increased force is felt as an increase in "G" level. In a 30 degree bank, 1.2 G is required to maintain level flight. The G level increases rapidly with an increase in bank; at 60 degrees, it goes to 2.0 G, and it takes roughly 9.6 G to fly a level 84 degree bank turn. As long as there is enough airspeed, the G level can be increased in any bank angle by pulling back on the stick. Finishing the turn, a simple matter of leveling the wings by using the ailerons and coordinated rudder, takes time; the airplane continues turning until the wings are level, so the roll-out must be started a little prior to reaching the desired heading. Back-stick pressure must also be released as bank decreases or the aircraft will climb.
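The load-factor figures quoted above follow from the standard relation for a level, coordinated turn: required G equals 1 divided by the cosine of the bank angle. The short check below is a sketch of that calculation, not code from the source.

```python
# Load factor (G) required to hold altitude in a level, coordinated turn:
# n = 1 / cos(bank angle). A quick check of the figures quoted in the text.
import math

for bank_deg in (0, 30, 60, 84):
    n = 1.0 / math.cos(math.radians(bank_deg))
    print(f"{bank_deg:2d} deg bank -> {n:4.1f} G")

# Expected output:
#  0 deg bank ->  1.0 G
# 30 deg bank ->  1.2 G
# 60 deg bank ->  2.0 G
# 84 deg bank ->  9.6 G
```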
Maneuverability
Airplanes are not limited to being a relatively fast means of getting somewhere. Long ago thrill-seeking pilots discovered that aircraft have the potential for providing loads of fun while getting nowhere fast. Aerobatics are an essential skill for fighter pilots, and the training they give in positional orientation and judgment is considered so vital that a great deal of time is spent teaching these maneuvers. Maneuverability is defined as the ability to change the speed and flight direction of an airplane. A highly maneuverable airplane, such as a fighter, has a capability to accelerate or slow down very quickly, and also to turn sharply. Quick turns with short turn radii place high loads on the wings as well as the pilot. These loads are referred to as "g forces" and the ability to "pull g's" is considered one measure of maneuverability. One g is the force acting on the airplane in level flight imposed by the gravitational pull of the earth. A 5 g maneuver exerts five times the gravitational force of the earth.
Aileron Roll
The aileron roll is simply a 360 degree roll accomplished by putting in and maintaining coordinated aileron pressure. The maneuver is started slightly nose high because, as the airplane rolls, its lift vector is no longer countering its weight, so the nose of the airplane drops significantly during the maneuver. Back stick pressure is maintained throughout so that even when upside down, positive seat pressure (about 1 G) will be felt. As the airplane approaches wings-level at the end of the maneuver, aileron pressure is removed and the roll stops.
Loop
A loop is simply a 360 degree change in pitch. Because the airplane will climb several thousand feet during the maneuver, it is started at a relatively high airspeed and power setting (if these are too low, the airspeed will decay excessively in the climb and the maneuver will have to be discontinued). The pilot, once satisfied with the airspeed and throttle setting, will pull back on the stick until about three Gs are felt. The nose of the airplane will go up and a steadily increasing climb will be established. As the maneuver continues, positive G is maintained by continuing to pull. The airplane continues to increase its pitch until it has pitched through a full circle. When the world is right-side-up again, the pilot releases the back stick pressure and returns the aircraft to level flight.
MISTAKES
Any time you place yourself in a several thousand pound machine and force it to travel through the air at high speeds and altitudes, there is going to be some risk. Many think that the primary risk in flying is mechanical failure or weather. Contrary to this belief, most airplanes (even those made of cloth and wood) that crash do so as a result of pilot error -- frequently from attempting to fly too slowly! The stall is the initial result of letting the airspeed decay below what is required for the wings to produce sufficient lift. With insufficient lift to counteract aircraft weight, the airplane is not being "held up" by the wings any more and it accelerates toward the ground. At low altitude, the stall can be immediately disastrous, but with enough altitude below, the pilot can take action to recover. Recovery from the stall is accomplished by correcting the condition that led to it. Since the stall is caused by attempting to fly at too high an AOA, the pilot must immediately reduce the AOA by moving the stick forward.
At the same time, the throttle is advanced to full power to rapidly increase the airspeed needed for a return to level flight or climb. Aircraft are almost always designed to give some warning prior to a stall. In very large aircraft, special sensors detect the impending stall and physically shake the control stick. Cessna uses a buzzer located in the wing root for its light aircraft. High-performance aircraft have a horizontal stabilizer placed so that, as a stall is approached, the turbulent air coming off the top of the wing hits the horizontal stabilizer and shakes the flight controls. In extreme conditions, the whole airplane will shake. These warnings are difficult to ignore; they give the pilot sufficient time to act to prevent the stall. If a stall is maintained and yaw is somehow induced, a spin can result. Spins can be recognized by high descent and roll rates, and a flight path that is straight down. Clearly, this is a situation to be entered with some forethought. Harder to recover from than a stall, and much more dangerous in terms of altitude loss, the spin is an extremely complex maneuver and beyond the scope of this text. The good news is that if you do not stall, you cannot spin. "All good things must come to an end," and most flights end with a landing. The relative difficulty of this maneuver is often expressed by a student pilot after the first solo flight: "The first thought that came to mind after I took off was `Oh boy, now I've gotta land this thing!'" After lining the airplane up with the runway and configuring it properly (landing gear, proper flap setting, speedbrake out), the pilot uses the throttle setting to maintain the proper airspeed (100 knots) and uses the elevators and ailerons to keep the airplane headed for the runway. The airplane is set up in a shallow descent (about three degrees) aimed at the near end of the runway. If this part of the landing, the "final approach," is flown correctly, it will look like the jet is headed for a collision with the approach end of the runway. As the airplane closes in on the approach end, the pilot begins to ease the stick back to level off the airplane several feet above the runway and slows to landing speed by reducing the power to idle. As the airplane levels off just above the ground in idle power, it will lose speed rapidly because there is little or no thrust to counter the drag. The pilot continues to move the stick back to increase the AOA and keep the airplane flying for just a little while longer. In a well-flown landing, the airplane will settle to the ground just before the stall AOA is reached. Now a land-based vehicle, the airplane is controlled with the brakes and slowed to taxi speed.
The Axis System
A good understanding of the basic axis system used to describe aircraft motion is necessary to appreciate flight data. Aircraft translational motion is described in terms of motion in three different directions, each direction being perpendicular to the other two (orthogonal). Motion in the X direction is forward and aft velocity. The Y direction produces sideways motion to the left and right, and up and down motion is in the Z direction. The rotational motion of an aircraft can be described as rotation about the same three axes; pitch rotation (nose up or nose down) is about the y axis, lateral or roll rotation (one wing up or down) is about the x axis, and yaw rotation (nose right or left) is about the z axis. There are several slightly different versions of the basic axis system just described.
They differ primarily in the exact placement of the zero reference lines, but are generally similar in their directions. (For example, the body-axis system uses the fuselage center line as the x axis, while a wind-axis system uses the direction that the aircraft is moving through the air as the x axis.)
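To make the axis convention concrete, here is a small illustrative sketch, not part of the original text, that expresses roll, pitch, and yaw as rotations about the x, y, and z body axes. The sign conventions shown are one common choice and are an assumption, since the text does not specify them.

```python
# Roll, pitch, and yaw as rotations about the x, y, and z body axes (one common convention).
import numpy as np

def roll(phi):     # rotation about the x (fore-aft) axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def pitch(theta):  # rotation about the y (wing-to-wing) axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def yaw(psi):      # rotation about the z (vertical) axis
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# A purely forward (x-direction) velocity, then the same vector after a 3-degree pitch-up:
v_body = np.array([100.0, 0.0, 0.0])
v_pitched = pitch(np.radians(3)) @ v_body
print(v_pitched)
```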
http://www.fas.org/man/dod-101/sys/ac/intro.htm
Original author: Dennis Crunkilton

Shift registers, like counters, are a form of sequential logic. Sequential logic, unlike combinational logic, is not only affected by the present inputs, but also by the prior history. In other words, sequential logic remembers past events. Shift registers produce a discrete delay of a digital signal or waveform. A waveform synchronized to a clock, a repeating square wave, is delayed by "n" discrete clock times, where "n" is the number of shift register stages. Thus, a four stage shift register delays "data in" by four clocks to "data out". The stages in a shift register are delay stages, typically type "D" Flip-Flops or type "JK" Flip-Flops.

Formerly, very long (several hundred stages) shift registers served as digital memory. This obsolete application is reminiscent of the acoustic mercury delay lines used as early computer memory. Serial data transmission, over a distance of meters to kilometers, uses shift registers to convert parallel data to serial form. Serial data communications replaces many slow parallel data wires with a single serial high speed circuit. Serial data over shorter distances of tens of centimeters uses shift registers to get data into and out of microprocessors. Numerous peripherals, including analog to digital converters, digital to analog converters, display drivers, and memory, use shift registers to reduce the amount of wiring in circuit boards. Some specialized counter circuits actually use shift registers to generate repeating waveforms. Longer shift registers, with the help of feedback, generate patterns so long that they look like random noise, pseudo-noise.

Basic shift registers are classified by structure according to the following types: serial-in/serial-out, parallel-in/serial-out, serial-in/parallel-out, parallel-in/parallel-out (universal), and ring counter. Above we show a block diagram of a serial-in/serial-out shift register, which is 4-stages long. Data at the input will be delayed by four clock periods from the input to the output of the shift register. Data at "data in", above, will be present at the Stage A output after the first clock pulse. After the second pulse stage A data is transferred to stage B output, and "data in" is transferred to stage A output. After the third clock, stage C is replaced by stage B; stage B is replaced by stage A; and stage A is replaced by "data in". After the fourth clock, the data originally present at "data in" is at stage D, "output". The "first in" data is "first out" as it is shifted from "data in" to "data out".

Data is loaded into all stages of a parallel-in/serial-out shift register at once. The data is then shifted out via "data out" by clock pulses. Since a 4-stage shift register is shown above, four clock pulses are required to shift out all of the data. In the diagram above, stage D data will be present at the "data out" up until the first clock pulse; stage C data will be present at "data out" between the first clock and the second clock pulse; stage B data will be present between the second clock and the third clock; and stage A data will be present between the third and the fourth clock. After the fourth clock pulse and thereafter, successive bits of "data in" should appear at "data out" of the shift register after a delay of four clock pulses. If four switches were connected to DA through DD, the status could be read into a microprocessor using only one data pin and a clock pin. Since adding more switches would require no additional pins, this approach looks attractive for many inputs.
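The four-clock delay just described can be modeled in a few lines of Python. This sketch is added here for illustration and is not part of the original text; it treats each stage as one bit and applies one clock per loop iteration.

```python
# Minimal model of the 4-stage serial-in/serial-out shift register described above.
def clock_siso(stages, data_in):
    """One clock pulse: data_in enters stage A, everything else moves one stage right."""
    new_stages = [data_in] + stages[:-1]   # A <- data in, B <- A, C <- B, D <- C
    return new_stages, new_stages[-1]      # "data out" is the Q of the last stage (D)

stages = [0, 0, 0, 0]                      # stages A, B, C, D, initially cleared
serial_in = [1, 0, 1, 1, 0, 0, 0, 0]       # an arbitrary input stream
for clock, bit in enumerate(serial_in, start=1):
    stages, data_out = clock_siso(stages, bit)
    print(clock, stages, "data out =", data_out)   # the first input bit reaches "data out" on clock 4
```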
Above, four data bits will be shifted in from "data in" by four clock pulses and be available at QA through QD for driving external circuitry such as LEDs, lamps, relay drivers, and horns. After the first clock, the data at "data in" appears at QA. After the second clock, the old QA data appears at QB; QA receives next data from "data in". After the third clock, QB data is at QC. After the fourth clock, QC data is at QD. This stage contains the data first present at "data in". The shift register should now contain four data bits.

A parallel-in/parallel-out shift register combines the function of the parallel-in, serial-out shift register with the function of the serial-in, parallel-out shift register to yield the universal shift register. The "do anything" shifter comes at a price: the increased number of I/O (Input/Output) pins may reduce the number of stages which can be packaged. Data presented at DA through DD is parallel loaded into the registers. This data at QA through QD may be shifted by the number of pulses presented at the clock input. The shifted data is available at QA through QD. The "mode" input, which may be more than one input, controls parallel loading of data from DA through DD, shifting of data, and the direction of shifting. There are shift registers which will shift data either left or right.

If the serial output of a shift register is connected to the serial input, data can be perpetually shifted around the ring as long as clock pulses are present. If the output is inverted before being fed back as shown above, we do not have to worry about loading the initial data into the "ring counter".

Serial-in, serial-out shift registers delay data by one clock time for each stage. They will store a bit of data for each register. A serial-in, serial-out shift register may be one to 64 bits in length, longer if registers or packages are cascaded. Below is a single stage shift register receiving data which is not synchronized to the register clock. The "data in" at the D pin of the type D FF (Flip-Flop) does not change levels when the clock changes from low to high. We may want to synchronize the data to a system-wide clock in a circuit board to improve the reliability of a digital logic circuit.

The obvious point (as compared to the figure below) illustrated above is that whatever "data in" is present at the D pin of a type D FF is transferred from D to output Q at clock time. Since our example shift register uses positive edge sensitive storage elements, the output Q follows the D input when the clock transitions from low to high as shown by the up arrows on the diagram above. There is no doubt what logic level is present at clock time because the data is stable well before and after the clock edge. This is seldom the case in multi-stage shift registers. But, this was an easy example to start with. We are only concerned with the positive, low to high, clock edge. The falling edge can be ignored. It is very easy to see Q follow D at clock time above. Compare this to the diagram below where the "data in" appears to change with the positive clock edge.

Since "data in" appears to change at clock time t1 above, what does the type D FF see at clock time? The short, oversimplified answer is that it sees the data that was present at D prior to the clock. That is what is transferred to Q at clock time t1. The correct waveform is QC. At t1 Q goes to a zero if it is not already zero. The D input does not see a one until time t2, at which time Q goes high.
Since data, above, present at D is clocked to Q at clock time, and Q cannot change until the next clock time, the D FF delays data by one clock period, provided that the data is already synchronized to the clock. The QA waveform is the same as "data in" with a one clock period delay.

A more detailed look at what the input of the type D Flip-Flop sees at clock time follows. Refer to the figure below. Since "data in" appears to change at clock time (above), we need further information to determine what the D FF sees. If the "data in" is from another shift register stage, another same type D FF, we can draw some conclusions based on data sheet information. Manufacturers of digital logic make available information about their parts in data sheets, formerly only available in a collection called a data book. Data books are still available, though the manufacturer's web site is the modern source.

The following data was extracted from the CD4006b data sheet for operation at 5VDC, which serves as an example to illustrate timing. tS is the setup time, the time data must be present before clock time. In this case data must be present at D 100ns prior to the clock. Furthermore, the data must be held for hold time tH=60ns after clock time. These two conditions must be met to reliably clock data from D to Q of the Flip-Flop.

There is no problem meeting the setup time of 100ns as the data at D has been there for the whole previous clock period if it comes from another shift register stage. For example, at a clock frequency of 1 MHz, the clock period is 1000 ns, plenty of time. Data will actually be present for about 1000 ns prior to the clock, which is much greater than the minimum required tS of 100 ns. The hold time tH=60ns is met because D connected to Q of another stage cannot change any faster than the propagation delay of the previous stage tP=200ns. Hold time is met as long as the propagation delay of the previous D FF is greater than the hold time. Data at D driven by another stage Q will not change any faster than 200ns for the CD4006b. To summarize, output Q follows input D at nearly clock time if Flip-Flops are cascaded into a multi-stage shift register.

Three type D Flip-Flops are cascaded Q to D and the clocks paralleled to form a three stage shift register above. Type JK FFs cascaded Q to J, Q' to K with clocks in parallel yield an alternate form of the shift register above.

A serial-in/serial-out shift register has a clock input, a data input, and a data output from the last stage. In general, the other stage outputs are not available. Otherwise, it would be a serial-in, parallel-out shift register. The waveforms below are applicable to either one of the preceding two versions of the serial-in, serial-out shift register. The three pairs of arrows show that a three stage shift register temporarily stores 3-bits of data and delays it by three clock periods from input to output.

At clock time t1 a "data in" of 0 is clocked from D to Q of all three stages. In particular, D of stage A sees a logic 0, which is clocked to QA where it remains until time t2. At clock time t2 a "data in" of 1 is clocked from D to QA. At stages B and C, a 0, fed from the preceding stages, is clocked to QB and QC. At clock time t3 a "data in" of 0 is clocked from D to QA. QA goes low and stays low for the remaining clocks due to "data in" being 0. QB goes high at t3 due to a 1 from the previous stage. QC is still low after t3 due to a low from the previous stage.
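The setup and hold argument above reduces to two quick subtractions. The following sketch, added for illustration using the figures quoted above for the CD4006b example, checks both margins at a 1 MHz shift clock.

```python
# Setup/hold margin check for cascaded stages, using the example figures above.
t_setup = 100e-9    # tS: data must be stable this long before the clock edge
t_hold  = 60e-9     # tH: data must stay stable this long after the clock edge
t_prop  = 200e-9    # tP: propagation delay of the previous stage's Q output
f_clock = 1e6       # 1 MHz shift clock -> 1000 ns period

t_period = 1.0 / f_clock
setup_margin = t_period - t_setup   # data from the previous stage has been stable ~a full period
hold_margin  = t_prop - t_hold      # the previous Q cannot change faster than its propagation delay

print(f"setup margin = {setup_margin * 1e9:.0f} ns, hold margin = {hold_margin * 1e9:.0f} ns")
# Both margins are positive, so data is clocked reliably from D to Q at this rate.
```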
QC finally goes high at clock t4 due to the high fed to D from the previous stage QB. All earlier stages have 0s shifted into them. And, after the next clock pulse at t5, all logic 1s will have been shifted out, replaced by 0s.

We will take a closer look at the following parts available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links. The following serial-in/ serial-out shift registers are 4000 series CMOS (Complementary Metal Oxide Semiconductor) family parts. As such, they will accept a VDD, positive power supply, of 3 volts to 15 volts. The VSS pin is grounded. The maximum frequency of the shift clock, which varies with VDD, is a few megahertz. See the full data sheet for details.

The 18-bit CD4006b consists of two stages of 4-bits and two more stages of 5-bits with an output tap at 4-bits. Thus, the 5-bit stages could be used as 4-bit shift registers. To get a full 18-bit shift register the output of one shift register must be cascaded to the input of another and so on until all stages create a single shift register as shown below.

A CD4031 64-bit serial-in/ serial-out shift register is shown below. A number of pins are not connected (nc). Both Q and Q' are available from the 64th stage, actually Q64 and Q'64. There is also a Q64 "delayed" from a half stage which is delayed by half a clock cycle. A major feature is a data selector which is at the data input to the shift register. The "mode control" selects between two inputs: data 1 and data 2. If "mode control" is high, data will be selected from "data 2" for input to the shift register. In the case of "mode control" being logic low, the "data 1" is selected. Examples of this are shown in the two figures below.

The "data 2" above is wired to the Q64 output of the shift register. With "mode control" high, the Q64 output is routed back to the shifter data input D. Data will recirculate from output to input. The data will repeat every 64 clock pulses as shown above. The question that arises is how did this data pattern get into the shift register in the first place? With "mode control" low, the CD4031 "data 1" is selected for input to the shifter. The output, Q64, is not recirculated because the lower data selector gate is disabled. By disabled we mean that the logic low "mode select" inverted twice to a low at the lower NAND gate prevents it from passing any signal on the lower pin (data 2) to the gate output. Thus, it is disabled.

A CD4517b dual 64-bit shift register is shown above. Note the taps at the 16th, 32nd, and 48th stages. That means that shift registers of those lengths can be configured from one of the 64-bit shifters. Of course, the 64-bit shifters may be cascaded to yield an 80-bit, 96-bit, 112-bit, or 128-bit shift register. The clocks CLA and CLB need to be paralleled when cascading the two shifters. WEA and WEB are grounded for normal shifting operations. The data inputs to the shift registers A and B are DA and DB respectively.

Suppose that we require a 16-bit shift register. Can this be configured with the CD4517b? How about a 64-bit shift register from the same part? Above we show a CD4517b wired as a 16-bit shift register for section B. The clock for section B is CLB. The data is clocked in at DB. And the data delayed by 16 clocks is picked off at Q16B. WEB, the write enable, is grounded. Above we also show the same CD4517b wired as a 64-bit shift register for the independent section A. The clock for section A is CLA. The data enters at DA.
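The data-selector arrangement on the CD4031 amounts to choosing between the recirculated last-stage output and new data. The tiny sketch below is an added illustration of that idea, not the CD4031's actual internal logic, and uses a short register in place of the 64 stages.

```python
# Mode-controlled input selector feeding a shift register, as described above.
def clock_selectable(stages, data1, mode_control):
    """mode_control high: recirculate the last stage; low: take new data from data1."""
    data_in = stages[-1] if mode_control else data1
    return [data_in] + stages[:-1]

reg = [0] * 8                               # a short register stands in for the 64 stages
for bit in [1, 0, 1, 1, 0, 0, 1, 0]:        # mode low: load a pattern from "data 1"
    reg = clock_selectable(reg, bit, mode_control=0)
loaded = list(reg)
for _ in range(8):                          # mode high: the pattern recirculates
    reg = clock_selectable(reg, 0, mode_control=1)
print(reg == loaded)                        # True: after 8 clocks the pattern is back where it started
```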
The data delayed by 64-clock pulses is picked up from Q64A. WEA, the write enable for section A, is grounded.

Parallel-in/ serial-out shift registers do everything that the previous serial-in/ serial-out shift registers do plus input data to all stages simultaneously. The parallel-in/ serial-out shift register stores data, shifts it on a clock by clock basis, and delays it by the number of stages times the clock period. In addition, parallel-in/ serial-out really means that we can load data in parallel into all stages before any shifting ever begins. This is a way to convert data from a parallel format to a serial format. By parallel format we mean that the data bits are present simultaneously on individual wires, one for each data bit as shown below. By serial format we mean that the data bits are presented sequentially in time on a single wire or circuit as in the case of the "data out" on the block diagram below.

Below we take a close look at the internal details of a 3-stage parallel-in/ serial-out shift register. A stage consists of a type D Flip-Flop for storage, and an AND-OR selector to determine whether data will load in parallel, or shift stored data to the right. In general, these elements will be replicated for the number of stages required. We show three stages due to space limitations. Four, eight or sixteen bits is normal for real parts.

Above we show the parallel load path when SHIFT/LD' is logic low. The upper NAND gates serving DA DB DC are enabled, passing data to the D inputs of type D Flip-Flops QA QB QC respectively. At the next positive going clock edge, the data will be clocked from D to Q of the three FFs. Three bits of data will load into QA QB QC at the same time. The type of parallel load just described, where the data loads on a clock pulse, is known as synchronous load because the loading of data is synchronized to the clock. This needs to be differentiated from asynchronous load where loading is controlled by the preset and clear pins of the Flip-Flops, which does not require the clock. Only one of these load methods is used within an individual device, the synchronous load being more common in newer devices.

The shift path is shown above when SHIFT/LD' is logic high. The lower AND gates of the pairs feeding the OR gate are enabled giving us a shift register connection of SI to DA, QA to DB, QB to DC, QC to SO. Clock pulses will cause data to be right shifted out to SO on successive pulses.

The waveforms below show both parallel loading of three bits of data and serial shifting of this data. Parallel data at DA DB DC is converted to serial data at SO. What we previously described with words for parallel loading and shifting is now set down as waveforms above. As an example we present 101 to the parallel inputs DA DB DC. Next, the SHIFT/LD' goes low, enabling loading of data as opposed to shifting of data. It needs to be low a short time before and after the clock pulse due to setup and hold requirements. It is considerably wider than it has to be. Though, with synchronous logic it is convenient to make it wide. We could have made the active low SHIFT/LD' almost two clocks wide, low almost a clock before t1 and back high just before t3. The important factor is that it needs to be low around clock time t1 to enable parallel loading of the data by the clock. Note that at t1 the data 101 at DA DB DC is clocked from D to Q of the Flip-Flops as shown at QA QB QC at time t1. This is the parallel loading of the data synchronous with the clock.
Now that the data is loaded, we may shift it provided that SHIFT/LD' is high to enable shifting, which it is prior to t2. At t2 the data 0 at QC is shifted out of SO which is the same as the QC waveform. It is either shifted into another integrated circuit, or lost if there is nothing connected to SO. The data at QB, a 0, is shifted to QC. The 1 at QA is shifted into QB. With "data in" a 0, QA becomes 0. After t2, QA QB QC = 010. After t3, QA QB QC = 001. This 1, which was originally present at QA after t1, is now present at SO and QC. The last data bit is shifted out to an external integrated circuit if it exists. After t4 all data from the parallel load is gone. At clock t5 we show the shifting in of a data 1 present on the SI, serial input.

Why provide SI and SO pins on a shift register? These connections allow us to cascade shift register stages to provide larger shifters than are available in a single IC (Integrated Circuit) package. They also allow serial connections to and from other ICs like microprocessors.

Let's take a closer look at parallel-in/ serial-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links. The SN74ALS166 shown above is the closest match of an actual part to the previous parallel-in/ serial out shifter figures. Let us note the minor changes to our figure above. First of all, there are 8-stages. We only show three. All 8-stages are shown on the data sheet available at the link above. The manufacturer labels the data inputs A, B, C, and so on to H. The SHIFT/LOAD control is called SH/LD'. It is abbreviated from our previous terminology, but works the same: parallel load if low, shift if high. The shift input (serial data in) is SER on the ALS166 instead of SI. The clock CLK is controlled by an inhibit signal, CLKINH. If CLKINH is high, the clock is inhibited, or disabled. Otherwise, this "real part" is the same as what we have looked at in detail.

Above is the ANSI (American National Standards Institute) symbol for the SN74ALS166 as provided on the data sheet. Once we know how the part operates, it is convenient to hide the details within a symbol. There are many general forms of symbols. The advantage of the ANSI symbol is that the labels provide hints about how the part operates. The large notched block at the top of the '74ALS166 is the control section of the ANSI symbol. There is a reset indicated by R. There are three control signals: M1 (Shift), M2 (Load), and C3/1 (arrow) (inhibited clock). The clock has two functions. First, C3 for shifting parallel data wherever a prefix of 3 appears. Second, whenever M1 is asserted, as indicated by the 1 of C3/1 (arrow), the data is shifted as indicated by the right pointing arrow. The slash (/) is a separator between these two functions. The 8-shift stages, as indicated by title SRG8, are identified by the external inputs A, B, C, to H. The internal 2, 3D indicates that data, D, is controlled by M2 [Load] and C3 clock. In this case, we can conclude that the parallel data is loaded synchronously with the clock C3. The upper stage at A is a wider block than the others to accommodate the input SER. The legend 1, 3D implies that SER is controlled by M1 [Shift] and C3 clock. Thus, we expect to clock in data at SER when shifting as opposed to parallel loading.
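As an added illustration, not part of the original text, the load-then-shift sequence above (101 loaded, then shifted out at SO) can be modeled directly:

```python
# 3-stage parallel-in/serial-out model of the SHIFT/LD' behaviour described above.
def clock_piso(q, shift_ld_n, parallel_in, si=0):
    """One positive clock edge. q is [QA, QB, QC]; SO is simply QC."""
    if shift_ld_n == 0:
        q = list(parallel_in)          # synchronous parallel load of DA DB DC
    else:
        q = [si] + q[:-1]              # shift right: SI -> QA -> QB -> QC (-> SO)
    return q, q[-1]                    # return (new state, SO)

q = [0, 0, 0]
q, so = clock_piso(q, 0, [1, 0, 1])    # t1: load 101, so QA QB QC = 1 0 1 and SO = 1
for t in (2, 3, 4):                    # t2..t4: shift the word out with SI = 0
    q, so = clock_piso(q, 1, [1, 0, 1], si=0)
    print("after t%d:" % t, q, "SO =", so)   # QA QB QC = 010, 001, 000; SO = 0, 1, 0
```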
The ANSI/IEEE basic gate rectangular symbols are provided above for comparison to the more familiar shape symbols so that we may decipher the meaning of the symbology associated with the CLKINH and CLK pins on the previous ANSI SN74ALS166 symbol. The CLK and CLKINH feed an OR gate on the SN74ALS166 ANSI symbol. OR is indicated by the ≥1 on the rectangular inset symbol. The long triangle at the output indicates a clock. If there were a bubble with the arrow this would have indicated shift on the negative clock edge (high to low). Since there is no bubble with the clock arrow, the register shifts on the positive (low to high transition) clock edge. The long arrow, after the legend C3/1, pointing right indicates shift right, which is down the symbol.

Part of the internal logic of the SN74ALS165 parallel-in/ serial-out, asynchronous load shift register is reproduced from the data sheet above. See the link at the beginning of this section for the full diagram. We have not looked at asynchronous loading of data up to this point. First of all, the loading is accomplished by application of appropriate signals to the Set (preset) and Reset (clear) inputs of the Flip-Flops. The upper NAND gates feed the Set pins of the FFs and also cascade into the lower NAND gates feeding the Reset pins of the FFs. The lower NAND gate inverts the signal in going from the Set pin to the Reset pin.

First, SH/LD' must be pulled low to enable the upper and lower NAND gates. If SH/LD' were at a logic high instead, the inverter feeding a logic low to all NAND gates would force a high out, releasing the "active low" Set and Reset pins of all FFs. There would be no possibility of loading the FFs. With SH/LD' held low, we can feed, for example, a data 1 to parallel input A, which inverts to a zero at the upper NAND gate output, setting FF QA to a 1. The 0 at the Set pin is fed to the lower NAND gate where it is inverted to a 1, releasing the Reset pin of QA. Thus, a data A=1 sets QA=1. Since none of this required the clock, the loading is asynchronous with respect to the clock. We use an asynchronous loading shift register if we cannot wait for a clock to parallel load data, or if it is inconvenient to generate a single clock pulse. The only difference in feeding a data 0 to parallel input A is that it inverts to a 1 out of the upper gate, releasing Set. This 1 at Set is inverted to a 0 at the lower gate, pulling Reset to a low, which resets QA=0.

The ANSI symbol for the SN74ALS165 above has two internal controls C1 [LOAD] and C2 clock from the OR function of (CLKINH, CLK). SRG8 says 8-stage shifter. The arrow after C2 indicates shifting right or down. SER input is a function of the clock as indicated by internal label 2D. The parallel data inputs A, B, C to H are a function of C1 [LOAD], indicated by internal label 1D. C1 is asserted when SH/LD' =0 due to the half-arrow inverter at the input. Compare this to the control of the parallel data inputs by the clock of the previous synchronous ANSI SN74ALS166. Note the differences in the ANSI data labels.

On the CD4014B above, M1 is asserted when LD/SH'=0. M2 is asserted when LD/SH'=1. Clock C3/1 is used for parallel loading data at 2, 3D when M2 is active as indicated by the 2,3 prefix labels. Pins P3 to P7 are understood to have the same internal 2,3 prefix labels as P2 and P8. At SER, the 1,3D prefix implies that M1 and clock C3 are necessary to input serial data. Right shifting takes place when M1 is active, as indicated by the 1 in the C3/1 arrow.
The CD4021B is a similar part except for asynchronous parallel loading of data as implied by the lack of any 2 prefix in the data label 1D for pins P1, P2, to P8. Of course, prefix 2 in label 2D at input SER says that data is clocked into this pin. The OR gate inset shows that the clock is controlled by LD/SH'.

The above SN74LS674 internal label SRG 16 indicates a 16-bit shift register. The MODE input to the control section at the top of the symbol is labeled 1,2 M3. Internal M3 is a function of input MODE and G1 and G2 as indicated by the 1,2 preceding M3. The base label G indicates an AND function of any such G inputs. Input R/W' is internally labeled G1/2 EN. This is an enable EN (controlled by G1 AND G2) for tristate devices used elsewhere in the symbol. We note that CS' (pin 1) is internal G2. Chip select CS' also is ANDed with the input CLK to give internal clock C4. The bubble within the clock arrow indicates that activity is on the negative (high to low transition) clock edge. The slash (/) is a separator implying two functions for the clock. Before the slash, C4 indicates control of anything with a prefix of 4. After the slash, the 3' (arrow) indicates shifting. The 3' of C4/3' implies shifting when M3 is de-asserted (MODE=0). The long arrow indicates shift right (down).

Moving down below the control section to the data section, we have external inputs P0-P15, pins (7-11, 13-23). The prefix 3,4 of internal label 3,4D indicates that M3 and the clock C4 control loading of parallel data. The D stands for Data. This label is assumed to apply to all the parallel inputs, though not explicitly written out. Locate the label 3',4D on the right of the P0 (pin 7) stage. The complemented 3 indicates that M3=MODE=0 inputs (shifts) SER/Q15 (pin 5) at clock time, (4 of 3',4D) corresponding to clock C4. In other words, with MODE=0, we shift data into Q0 from the serial input (pin 6). All other stages shift right (down) at clock time.

Moving to the bottom of the symbol, the triangle pointing right indicates a buffer between Q and the output pin. The triangle pointing down indicates a tri-state device. We previously stated that the tristate is controlled by enable EN, which is actually G1 AND G2 from the control section. If R/W'=0, the tri-state is disabled, and we can shift data into Q0 via SER (pin 6), a detail we omitted above. We actually need MODE=0, R/W'=0, CS'=0. The internal logic of the SN74LS674 and a table summarizing the operation of the control signals are available in the link in the bullet list, top of section. If R/W'=1, the tristate is enabled, Q15 shifts out SER/Q15 (pin 6) and recirculates to the Q0 stage via the right hand wire to 3',4D. We have assumed that CS' was low, giving us clock C4/3' and G2 to enable the tri-state.

An application of a parallel-in/ serial-out shift register is to read data into a microprocessor. The Alarm above is controlled by a remote keypad. The alarm box supplies +5V and ground to the remote keypad to power it. The alarm reads the remote keypad every few tens of milliseconds by sending shift clocks to the keypad which returns serial data showing the status of the keys via a parallel-in/ serial-out shift register. Thus, we read nine key switches with four wires. How many wires would be required if we had to run a circuit for each of the nine keys? A practical application of a parallel-in/ serial-out shift register is to read many switch closures into a microprocessor on just a few pins.
Some low end microprocessors only have 6 I/O (Input/Output) pins available on an 8-pin package. Or, we may have used most of the pins on an 84-pin package. We may want to reduce the number of wires running around a circuit board, machine, vehicle, or building. This will increase the reliability of our system. It has been reported that manufacturers who have reduced the number of wires in an automobile produce a more reliable product. In any event, only three microprocessor pins are required to read in 8-bits of data from the switches in the figure above.

We have chosen an asynchronous loading device, the CD4021B, because it is easier to control the loading of data without having to generate a single parallel load clock. The parallel data inputs of the shift register are pulled up to +5V with a resistor on each input. If all switches are open, all 1s will be loaded into the shift register when the microprocessor moves the LD/SH' line from low to high, then back low in anticipation of shifting. Any switch closures will apply logic 0s to the corresponding parallel inputs. The data pattern at P1-P8 will be parallel loaded by the LD/SH'=1 generated by the microprocessor software.

The microprocessor generates shift pulses and reads a data bit for each of the 8-bits. This process may be performed totally with software, or larger microprocessors may have one or more serial interfaces to do the task more quickly with hardware. With LD/SH'=0, the microprocessor generates a 0 to 1 transition on the Shift clock line, then reads a data bit on the Serial data in line. This is repeated for all 8-bits.

The SER line of the shift register may be driven by another identical CD4021B circuit if more switch contacts need to be read, in which case the microprocessor generates 16 shift pulses. More likely, it will be driven by something else compatible with this serial data format, for example, an analog to digital converter, a temperature sensor, a keyboard scanner, a serial read-only memory. As for the switch closures, they may be limit switches on the carriage of a machine, an over-temperature sensor, a magnetic reed switch, a door or window switch, an air or water pressure switch, or a solid state optical interrupter.

A serial-in/parallel-out shift register is similar to the serial-in/ serial-out shift register in that it shifts data into internal storage elements and shifts data out at the serial-out, data-out, pin. It is different in that it makes all the internal stages available as outputs. Therefore, a serial-in/parallel-out shift register converts data from serial format to parallel format. If four data bits are shifted in by four clock pulses via a single wire at data-in, below, the data becomes available simultaneously on the four outputs QA to QD after the fourth clock pulse. The practical application of the serial-in/parallel-out shift register is to convert data from serial format on a single wire to parallel format on multiple wires. Perhaps we will illuminate four LEDs (Light Emitting Diodes) with the four outputs (QA QB QC QD).

The above details of the serial-in/parallel-out shift register are fairly simple. It looks like a serial-in/ serial-out shift register with taps added to each stage output. Serial data shifts in at SI (Serial Input). After a number of clocks equal to the number of stages, the first data bit in appears at SO (QD) in the above figure. In general, there is no SO pin. The last stage (QD above) serves as SO and is cascaded to the next package if it exists.
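Before looking at serial-in/parallel-out devices in more detail, here is a hypothetical sketch of the switch-reading sequence described above for the CD4021B. The gpio_write and gpio_read helpers are placeholders standing in for whatever port access a particular microprocessor provides, and the read-before-shift ordering is an assumption chosen so that the bit loaded into the last stage is captured first.

```python
# Placeholder GPIO helpers; on real hardware these would touch port pins.
def gpio_write(pin, level):
    pass

def gpio_read(pin):
    return 1          # pulled-up input: 1 = switch open, 0 = switch closed

def read_switches():
    """Read 8 switch closures through a CD4021B-style parallel-in/serial-out register."""
    gpio_write("LD/SH", 1)               # asynchronously parallel load the switch states
    gpio_write("LD/SH", 0)               # back low so the register will shift
    value = 0
    for _ in range(8):
        bit = gpio_read("SERIAL_DATA")   # last-stage output of the shift register
        value = (value << 1) | bit
        gpio_write("SHIFT_CLK", 1)       # rising edge moves the next bit to the output
        gpio_write("SHIFT_CLK", 0)
    return value                         # a 0 bit marks a closed switch
```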
If a serial-in/parallel-out shift register is so similar to a serial-in/ serial-out shift register, why do manufacturers bother to offer both types? Why not just offer the serial-in/parallel-out shift register? They actually only offer the serial-in/parallel-out shift register, as long as it has no more than 8-bits. Note that serial-in/ serial-out shift registers come in bigger than 8-bit lengths of 18 to 64 bits. It is not practical to offer a 64-bit serial-in/parallel-out shift register requiring that many output pins.

See waveforms below for the above shift register. The shift register has been cleared prior to any data by CLR', an active low signal, which clears all type D Flip-Flops within the shift register. Note the serial data 1011 pattern presented at the SI input. This data is synchronized with the clock CLK. This would be the case if it is being shifted in from something like another shift register, for example, a parallel-in/ serial-out shift register (not shown here). On the first clock at t1, the data 1 at SI is shifted from D to Q of the first shift register stage. After t2 this first data bit is at QB. After t3 it is at QC. After t4 it is at QD. Four clock pulses have shifted the first data bit all the way to the last stage QD. The second data bit, a 0, is at QC after the 4th clock. The third data bit, a 1, is at QB. The fourth data bit, another 1, is at QA. Thus, the serial data input pattern 1011 is contained in (QD QC QB QA). It is now available on the four outputs. It will be available on the four outputs from just after clock t4 to just before t5. This parallel data must be used or stored between these two times, or it will be lost due to shifting out the QD stage on following clocks t5 to t8 as shown above.

Let's take a closer look at serial-in/ parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links. The 74ALS164A is almost identical to our prior diagram with the exception of the two serial inputs A and B. The unused input should be pulled high to enable the other input. We do not show all the stages above. However, all the outputs are shown on the ANSI symbol below, along with the pin numbers. The CLK input to the control section of the above ANSI symbol has two internal functions: C1, control of anything with a prefix of 1. This would be clocking in of data at 1D. The second function, the arrow after the slash (/), is right (down) shifting of data within the shift register. The eight outputs are available to the right of the eight registers below the control section. The first stage is wider than the others to accommodate the A&B input.

The above internal logic diagram is adapted from the TI (Texas Instruments) data sheet for the 74AHC594. The type "D" FFs in the top row comprise a serial-in/ parallel-out shift register. This section works like the previously described devices. The outputs (QA' QB' to QH') of the shift register half of the device feed the type "D" FFs in the lower half in parallel. QH' (pin 9) is shifted out to any optional cascaded device package. A single positive clock edge at RCLK will transfer the data from D to Q of the lower FFs. All 8-bits transfer in parallel to the output register (a collection of storage elements). The purpose of the output register is to maintain a constant data output while new data is being shifted into the upper shift register section. This is necessary if the outputs drive relays, valves, motors, solenoids, horns, or buzzers.
This feature may not be necessary when driving LEDs as long as flicker during shifting is not a problem. Note that the 74AHC594 has separate clocks for the shift register (SRCLK) and the output register (RCLK). Also, the shifter may be cleared by SRCLR' and the output register by RCLR'. It is desirable to put the outputs in a known state at power-on, in particular, if driving relays, motors, etc. The waveforms below illustrate shifting and latching of data.

The above waveforms show shifting of 4-bits of data into the first four stages of the 74AHC594, then the parallel transfer to the output register. In actual fact, the 74AHC594 is an 8-bit shift register, and it would take 8-clocks to shift in 8-bits of data, which would be the normal mode of operation. However, the 4-bits we show save space and adequately illustrate the operation. We clear the shift register half a clock prior to t0 with SRCLR'=0. SRCLR' must be released back high prior to shifting. Just prior to t0 the output register is cleared by RCLR'=0. It, too, is released (RCLR'=1). Serial data 1011 is presented at the SI pin between clocks t0 and t4. It is shifted in by clocks t1 t2 t3 t4 appearing at internal shift stages QA' QB' QC' QD'. This data is present at these stages between t4 and t5. After t5 the desired data (1011) will be unavailable on these internal shifter stages. Between t4 and t5 we apply a positive going RCLK transferring data 1011 to register outputs QA QB QC QD. This data will be frozen here as more data (0s) shifts in during the succeeding SRCLKs (t5 to t8). There will not be a change in data here until another RCLK is applied.

The 74AHC595 is identical to the '594 except that the RCLR' is replaced by an OE' enabling a tri-state buffer at the output of each of the eight output register bits. Though the output register cannot be cleared, the outputs may be disconnected by OE'=1. This would allow external pull-up or pull-down resistors to force any relay, solenoid, or valve drivers to a known state during a system power-up. Once the system is powered-up and, say, a microprocessor has shifted and latched data into the '595, the output enable could be asserted (OE'=0) to drive the relays, solenoids, and valves with valid data, but not before that time.

Above are the proposed ANSI symbols for these devices. C3 clocks data into the serial input (external SER) as indicated by the 3 prefix of 2,3D. The arrow after C3/ indicates shifting right (down) of the shift register, the 8-stages to the left of the '595 symbol below the control section. The 2 prefix of 2,3D and 2D indicates that these stages can be reset by R2 (external SRCLR'). The 1 prefix of 1,4D on the '594 indicates that R1 (external RCLR') may reset the output register, which is to the right of the shift register section. The '595, which has an EN at external OE', cannot reset the output register. But, the EN enables the tristate (inverted triangle) output buffers. The right pointing triangle of both the '594 and '595 indicates internal buffering. Both the '594 and '595 output registers are clocked by C4 as indicated by 4 of 1,4D and 4D respectively.

The CD4094B is a 3 to 15VDC capable latching shift register alternative to the previous 74AHC594 devices. CLOCK, C1, shifts data in at SERIAL IN as implied by the 1 prefix of 1D. It is also the clock of the right shifting shift register (left half of the symbol body) as indicated by the /(right-arrow) of C1/(arrow) at the CLOCK input.
STROBE, C2, is the clock for the 8-bit output register to the right of the symbol body. The 2 of 2D indicates that C2 is the clock for the output register. The inverted triangle in the output latch indicates that the output is tristated, being enabled by EN3. The 3 preceding the inverted triangle and the 3 of EN3 are often omitted, as any enable (EN) is understood to control the tristate outputs. QS and QS' are non-latched outputs of the shift register stage. QS could be cascaded to SERIAL IN of a succeeding device.

A real-world application of the serial-in/ parallel-out shift register is to output data from a microprocessor to a remote panel indicator, or to another remote output device which accepts serial format data. The figure "Alarm with remote key pad" is repeated here from the parallel-in/ serial-out section with the addition of the remote display. Thus, we can display, for example, the status of the alarm loops connected to the main alarm box. If the alarm detects an open window, it can send serial data to the remote display to let us know. Both the keypad and the display would likely be contained within the same remote enclosure, separate from the main alarm box. However, we will only look at the display panel in this section.

If the display were on the same board as the alarm, we could just run eight wires to the eight LEDs along with two wires for power and ground. These eight wires are much less desirable on a long run to a remote panel. Using shift registers, we only need to run five wires: clock, serial data, a strobe, power, and ground. If the panel were just a few inches away from the main board, it might still be desirable to cut down on the number of wires in a connecting cable to improve reliability. Also, we sometimes use up most of the available pins on a microprocessor and need to use serial techniques to expand the number of outputs. Some integrated circuit output devices, such as digital to analog converters, contain serial-in/ parallel-out shift registers to receive data from microprocessors. The techniques illustrated here are applicable to those parts.

We have chosen the 74AHC594 serial-in/ parallel-out shift register with output register; though, it requires an extra pin, RCLK, to parallel load the shifted-in data to the output pins. This extra pin prevents the outputs from changing while data is shifting in. This is not much of a problem for LEDs. But, it would be a problem if driving relays, valves, motors, etc.

Code executed within the microprocessor would start with 8-bits of data to be output. One bit would be output on the "Serial data out" pin, driving SER of the remote 74AHC594. Next, the microprocessor generates a low to high transition on "Shift clock", driving SRCLK of the '594 shift register. This positive clock shifts the data bit at SER from "D" to "Q" of the first shift register stage. This has no effect on the QA LED at this time because of the internal 8-bit output register between the shift register and the output pins (QA to QH). Finally, "Shift clock" is pulled back low by the microprocessor. This completes the shifting of one bit into the '594.

The above procedure is repeated seven more times to complete the shifting of 8-bits of data from the microprocessor into the 74AHC594 serial-in/ parallel-out shift register. To transfer the 8-bits of data within the internal '594 shift register to the output requires that the microprocessor generate a low to high transition on RCLK, the output register clock. This applies new data to the LEDs.
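As a companion to the earlier read sketch, here is a hypothetical summary of this shift-then-latch sequence (its final RCLK handling is described just below). The gpio_write helper is again a placeholder rather than a real API, and MSB-first ordering is an assumption.

```python
# Placeholder GPIO helper; on real hardware this would drive a port pin.
def gpio_write(pin, level):
    pass

def write_leds(byte_value):
    """Shift 8 bits into a 74AHC594-style register, then latch them to the outputs."""
    for i in range(7, -1, -1):                  # MSB first (an assumed bit order)
        gpio_write("SER", (byte_value >> i) & 1)
        gpio_write("SRCLK", 1)                  # rising edge shifts the bit into the register
        gpio_write("SRCLK", 0)
    gpio_write("RCLK", 1)                       # one rising edge moves all 8 bits to the LEDs
    gpio_write("RCLK", 0)                       # return low, ready for the next transfer
```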
The RCLK needs to be pulled back low in anticipation of the next 8-bit transfer of data. The data present at the output of the '594 will remain until the process in the above two paragraphs is repeated for a new 8-bits of data. In particular, new data can be shifted into the '594 internal shift register without affecting the LEDs. The LEDs will only be updated with new data with the application of the RCLK rising edge.

What if we need to drive more than eight LEDs? Simply cascade another 74AHC594, connecting its SER pin to the QH' of the existing shifter. Parallel the SRCLK and RCLK pins. The microprocessor would need to transfer 16-bits of data with 16-clocks before generating an RCLK feeding both devices. The discrete LED indicators, which we show, could be 7-segment LEDs. Though, there are LSI (Large Scale Integration) devices capable of driving several 7-segment digits. Such a device accepts data from a microprocessor in a serial format, driving more LED segments than it has pins by multiplexing the LEDs. For example, see the link below for the MAX6955.

The purpose of the parallel-in/ parallel-out shift register is to take in parallel data, shift it, then output it as shown below. A universal shift register is a do-everything device in addition to the parallel-in/ parallel-out function. Above we apply four bits of data to a parallel-in/ parallel-out shift register at DA DB DC DD. The mode control, which may be multiple inputs, controls parallel loading versus shifting. The mode control may also control the direction of shifting in some real devices. The data will be shifted one bit position for each clock pulse. The shifted data is available at the outputs QA QB QC QD. The "data in" and "data out" are provided for cascading of multiple stages. Though, above, we can only cascade data for right shifting. We could accommodate cascading of left-shift data by adding a pair of left pointing signals, "data in" and "data out", above.

The internal details of a right shifting parallel-in/ parallel-out shift register are shown below. The tri-state buffers are not strictly necessary to the parallel-in/ parallel-out shift register, but are part of the real-world device shown below. The 74LS395 so closely matches our concept of a hypothetical right shifting parallel-in/ parallel-out shift register that we use an overly simplified version of the data sheet details above. See the link to the full data sheet for more details, later in this chapter.

LD/SH' controls the AND-OR multiplexer at the data input to the FFs. If LD/SH'=1, the upper four AND gates are enabled allowing application of parallel inputs DA DB DC DD to the four FF data inputs. Note the inverter bubble at the clock input of the four FFs. This indicates that the 74LS395 clocks data on the negative going clock, which is the high to low transition. The four bits of data will be clocked in parallel from DA DB DC DD to QA QB QC QD at the next negative going clock. In this "real part", OC' must be low if the data needs to be available at the actual output pins as opposed to only on the internal FFs. The previously loaded data may be shifted right by one bit position if LD/SH'=0 for the succeeding negative going clock edges. Four clocks would shift the data entirely out of our 4-bit shift register. The data would be lost unless our device was cascaded from QD' to SER of another device.

Above, a data pattern is presented to inputs DA DB DC DD. The pattern is loaded to QA QB QC QD. Then it is shifted one bit to the right.
The incoming data is indicated by X, meaning that we do not know what it is. If the input (SER) were grounded, for example, we would know what data (0) was shifted in. Also shown is right shifting by two positions, requiring two clocks.

The above figure serves as a reference for the hardware involved in right shifting of data. It is too simple to even bother with this figure, except for comparison to more complex figures to follow. Right shifting of data is provided above for reference to the previous right shifter.

If we need to shift left, the FFs need to be rewired. Compare to the previous right shifter. Also, SI and SO have been reversed. SI shifts to QC. QC shifts to QB. QB shifts to QA. QA leaves on the SO connection, where it could cascade to another shifter SI. This left shift sequence is backwards from the right shift sequence. Above we shift the same data pattern left by one bit.

There is one problem with the "shift left" figure above. There is no market for it. Nobody manufactures a shift-left part. A "real device" which shifts one direction can be wired externally to shift the other direction. Or, should we say, there is no left or right in the context of a device which shifts in only one direction. However, there is a market for a device which will shift left or right on command by a control line. Of course, left and right are valid in that context.

What we have above is a hypothetical shift register capable of shifting either direction under the control of L'/R. It is set up with L'/R=1 to shift the normal direction, right. L'/R=1 enables the multiplexer AND gates labeled R. This allows data to follow the path illustrated by the arrows, when a clock is applied. The connection path is the same as the "too simple" "shift right" figure above. Data shifts in at SR, to QA, to QB, to QC, where it leaves at SR cascade. This pin could drive SR of another device to the right.

What if we change L'/R to L'/R=0? With L'/R=0, the multiplexer AND gates labeled L are enabled, yielding a path, shown by the arrows, the same as the above "shift left" figure. Data shifts in at SL, to QC, to QB, to QA, where it leaves at SL cascade. This pin could drive SL of another device to the left. The prime virtue of the above two figures illustrating the "shift left/ right register" is simplicity. The operation of the left right control L'/R=0 is easy to follow. A commercial part needs the parallel data loading implied by the section title. This appears in the figure below.

Now that we can shift both left and right via L'/R, let us add SH/LD', shift/ load, and the AND gates labeled "load" to provide for parallel loading of data from inputs DA DB DC. When SH/LD'=0, AND gates R and L are disabled, AND gates "load" are enabled to pass data DA DB DC to the FF data inputs. The next clock CLK will clock the data to QA QB QC. As long as the same data is present it will be re-loaded on succeeding clocks. However, data present for only one clock will be lost from the outputs when it is no longer present on the data inputs. One solution is to load the data on one clock, then proceed to shift on the next four clocks. This problem is remedied in the 74ALS299 by the addition of another AND gate to the multiplexer. If SH/LD' is changed to SH/LD'=1, the AND gates labeled "load" are disabled, allowing the left/ right control L'/R to set the direction of shift on the L or R AND gates. Shifting is as in the previous figures.
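The SH/LD' and L'/R selection just described can be captured in a few lines. The sketch below is an added illustration under assumed names, not the internal logic of any particular part; the hold behaviour that the 74ALS299 adds is discussed next.

```python
# 3-stage shifter with parallel load and a left/right direction control, as described above.
def clock_shifter(q, sh_ld_n, lr, d, sr=0, sl=0):
    """One clock. q is [QA, QB, QC].
    sh_ld_n=0: parallel load d.
    sh_ld_n=1: lr=1 shifts right (SR->QA->QB->QC), lr=0 shifts left (SL->QC->QB->QA)."""
    if sh_ld_n == 0:
        return list(d)
    return [sr] + q[:-1] if lr else q[1:] + [sl]

q = clock_shifter([0, 0, 0], 0, 1, [1, 0, 1])   # parallel load 101
q = clock_shifter(q, 1, 1, None, sr=0)          # shift right -> 010
q = clock_shifter(q, 1, 0, None, sl=0)          # shift left  -> 100
print(q)
```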
The only thing needed to produce a viable integrated device is to add the fourth AND gate to the multiplexer, as alluded to for the 74ALS299. This is shown in the next section for that part.

Let's take a closer look at parallel-in/ parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets, follow the links. We have already looked at the internal details of the SN74LS395A; see the previous figure above, the 74LS395 parallel-in/ parallel-out shift register with tri-state output. Directly above is the ANSI symbol for the 74LS395.

Why only 4-bits, as indicated by SRG4 above? Having both parallel inputs and parallel outputs, in addition to control and power pins, does not allow for any more I/O (Input/Output) bits in a 16-pin DIP (Dual Inline Package). R indicates that the shift register stages are reset by input CLR' (active low, inverting half arrow at input) of the control section at the top of the symbol. OC', when low (invert arrow again), will enable (EN4) the four tristate output buffers (QA QB QC QD) in the data section. Load/shift' (LD/SH') at pin (7) corresponds to internals M1 (load) and M2 (shift). Look for prefixes of 1 and 2 in the rest of the symbol to ascertain what is controlled by these. The negative edge sensitive clock (indicated by the invert arrow at pin-10) C3/2 has two functions. First, the 3 of C3/2 affects any input having a prefix of 3, say 2,3D or 1,3D in the data section. This would be parallel load at A, B, C, D attributed to M1 and C3 for 1,3D. Second, 2 of C3/2-right-arrow indicates data clocking wherever 2 appears in a prefix (2,3D at pin-2). Thus we have clocking of data at SER into QA with mode 2. The right arrow after C3/2 accounts for shifting at internal shift register stages QA QB QC QD. The right pointing triangles indicate buffering; the inverted triangle indicates tri-state, controlled by the EN4. Note, all the 4s in the symbol associated with the EN are frequently omitted. Stages QB QC are understood to have the same attributes as QD. QD' cascades to the next package's SER to the right.

The table above, condensed from the '299 data sheet, summarizes the operation of the 74ALS299 universal shift/ storage register. Follow the '299 link above for full details. The multiplexer gates R, L, load operate as in the previous "shift left/ right register" figures. The difference is that the mode inputs S1 and S0 select shift left, shift right, and load with mode set to S1 S0 = 01, 10, and 11 respectively, as shown in the table, enabling multiplexer gates L, R, and load respectively. See table. A minor difference is the parallel load path from the tri-state outputs. Actually the tri-state buffers are (must be) disabled by S1 S0 = 11 to float the I/O bus for use as inputs. A bus is a collection of similar signals. The inputs are applied to A, B through H (same pins as QA, QB, through QH) and routed to the load gate in the multiplexers, and on to the D inputs of the FFs. Data is parallel loaded on a clock pulse.

The one new multiplexer gate is the AND gate labeled hold, enabled by S1 S0 = 00. The hold gate enables a path from the Q output of the FF back to the hold gate, to the D input of the same FF. The result is that with mode S1 S0 = 00, the output is continuously re-loaded with each new clock pulse. Thus, data is held. This is summarized in the table. To read data from outputs QA, QB, through QH, the tri-state buffers must be enabled by OE2', OE1' = 00 and mode = S1 S0 = 00, 01, or 10.
That is, mode is anything except load. See the second table. Right-shift data from a package to the left shifts in on the SR input. Any data shifted out to the right from stage QH cascades to the right via QH'. This output is unaffected by the tri-state buffers. The shift right sequence for S1 S0 = 10 is: SR > QA > QB > QC > QD > QE > QF > QG > QH (QH')

Left-shift data from a package to the right shifts in on the SL input. Any data shifted out to the left from stage QA cascades to the left via QA', also unaffected by the tri-state buffers. The shift left sequence for S1 S0 = 01 is: (QA') QA < QB < QC < QD < QE < QF < QG < QH (SL)

Shifting may take place with the tri-state buffers disabled by one of OE2' or OE1' = 1. Though, the register contents will not be accessible at the outputs. See table.

The "clean" ANSI symbol for the SN74ALS299 parallel-in/ parallel-out 8-bit universal shift register with tri-state output is shown for reference above. The annotated version of the ANSI symbol is shown to clarify the terminology contained therein. Note that the ANSI mode (S0 S1) is reversed from the order (S1 S0) used in the previous table. That reverses the decimal mode numbers (1 & 2). In any event, we are in complete agreement with the official data sheet, copying this inconsistency.

The Alarm with remote keypad block diagram is repeated below. Previously, we built the keypad reader and the remote display as separate units. Now we will combine both the keypad and display into a single unit using a universal shift register. Though separate in the diagram, the Keypad and Display are both contained within the same remote enclosure. We will parallel load the keyboard data into the shift register on a single clock pulse, then shift it out to the main alarm box. At the same time, we will shift LED data from the main alarm to the remote shift register to illuminate the LEDs. We will be simultaneously shifting keyboard data out and LED data into the shift register.

Eight LEDs and current limiting resistors are connected to the eight I/O pins of the 74ALS299 universal shift register. The LEDs can only be driven during Mode 3 with S1=0 S0=0. The OE1' and OE2' tristate enables are grounded to permanently enable the tristate outputs during modes 0, 1, 2. That will cause the LEDs to light (flicker) during shifting. If this were a problem, the OE1' and OE2' could be ungrounded and paralleled with S1 and S0 respectively to only enable the tristate buffers and light the LEDs during hold, mode 3. Let's keep it simple for this example.

During parallel loading, S0=1, inverted to a 0, enables the octal tristate buffers to ground the switch wipers. The upper, open, switch contacts are pulled up to logic high by the resistor-LED combination at the eight inputs. Any switch closure will short the input low. We parallel load the switch data into the '299 at clock t0 when both S0 and S1 are high. See waveforms below. Once S0 goes low, eight clocks (t0 to t8) shift switch closure data out of the '299 via the QH' pin. At the same time, new LED data is shifted in at SR of the '299 by the same eight clocks. The LED data replaces the switch closure data as shifting proceeds. After the 8th shift clock, t8, S1 goes low to yield hold mode (S1 S0 = 00). The data in the shift register remains the same even if there are more clocks, for example, t9, t10, etc. Where do the waveforms come from?
They could be generated by a microprocessor if the clock rate were not over 100 kHz, in which case, it would be inconvenient to generate any clocks after t8. If the clock was in the megahertz range, the clock would run continuously. The clock, S1 and S0 would be generated by digital logic, not shown here.

If the output of a shift register is fed back to the input, a ring counter results. The data pattern contained within the shift register will recirculate as long as clock pulses are applied. For example, the data pattern will repeat every four clock pulses in the figure below. However, we must load a data pattern. All 0's or all 1's doesn't count. Is a continuous logic level from such a condition useful?

We make provisions for loading data into the parallel-in/ serial-out shift register configured as a ring counter below. Any random pattern may be loaded. The most generally useful pattern is a single 1. Loading binary 1000 into the ring counter, above, prior to shifting yields a viewable pattern. The data pattern for a single stage repeats every four clock pulses in our 4-stage example. The waveforms for all four stages look the same, except for the one clock time delay from one stage to the next. See figure below.

The circuit above is a divide by 4 counter. Comparing the clock input to any one of the outputs shows a frequency ratio of 4:1. How many stages would we need for a divide by 10 ring counter? Ten stages would recirculate the 1 every 10 clock pulses.

An alternate method of initializing the ring counter to 1000 is shown above. The shift waveforms are identical to those above, repeating every fourth clock pulse. The requirement for initialization is a disadvantage of the ring counter over a conventional counter. At a minimum, it must be initialized at power-up since there is no way to predict what state flip-flops will power up in. In theory, initialization should never be required again. In actual practice, the flip-flops could eventually be corrupted by noise, destroying the data pattern. A "self correcting" counter, like a conventional synchronous binary counter, would be more reliable.

The above binary synchronous counter needs only two stages, but requires decoder gates. The ring counter had more stages, but was self decoding, saving the decode gates above. Another disadvantage of the ring counter is that it is not "self starting". If we need the decoded outputs, the ring counter looks attractive, in particular, if most of the logic is in a single shift register package. If not, the conventional binary counter is less complex without the decoder. The waveforms decoded from the synchronous binary counter are identical to the previous ring counter waveforms. The counter sequence is (QA QB) = (00 01 10 11).

The switch-tail ring counter, also known as the Johnson counter, overcomes some of the limitations of the ring counter. Like a ring counter, a Johnson counter is a shift register fed back on itself. It requires half the stages of a comparable ring counter for a given division ratio. If the complement output of a ring counter is fed back to the input instead of the true output, a Johnson counter results. The difference between a ring counter and a Johnson counter is which output of the last stage is fed back (Q or Q'). Carefully compare the feedback connection below to the previous ring counter. This "reversed" feedback connection has a profound effect upon the behavior of the otherwise similar circuits.
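Before following the Johnson counter's behaviour in detail, a quick simulation, added here for illustration rather than taken from the original figures, shows the plain ring counter recirculating a single 1 loaded as 1000; any one output is then high for one clock out of four, the divide-by-4 behaviour noted above.

```python
# 4-stage ring counter: the true output of the last stage (QD) feeds the first stage (QA).
q = [1, 0, 0, 0]                      # initialized to 1000, as in the text
for clock in range(1, 9):
    q = [q[-1]] + q[:-1]              # QD -> QA, and everything shifts one stage right
    print(clock, q)                   # the single 1 returns to QA after clocks 4 and 8
```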
Recirculating a single 1 around a ring counter divides the input clock by a factor equal to the number of stages, whereas a Johnson counter divides by a factor equal to twice the number of stages. For example, a 4-stage ring counter divides by 4; a 4-stage Johnson counter divides by 8.

Start a Johnson counter by clearing all stages to 0s before the first clock. This is often done at power-up time. Referring to the figure below, the first clock shifts three 0s from (QA QB QC) to the right into (QB QC QD). The 1 at QD' (the complement of QD) is shifted back into QA. Thus, we start shifting 1s to the right, replacing the 0s. Where a ring counter recirculated a single 1, the 4-stage Johnson counter recirculates four 0s then four 1s for an 8-bit pattern, then repeats.

The above waveforms illustrate that multi-phase square waves are generated by a Johnson counter. The 4-stage unit above generates four overlapping phases of 50% duty cycle. How many stages would be required to generate a set of three-phase waveforms? For example, a three-stage Johnson counter, driven by a 360 Hertz clock, would generate three 120° phased square waves at 60 Hertz.

The outputs of the flip-flops in a Johnson counter are easy to decode to a single state. Below, for example, each of the eight states of a 4-stage Johnson counter is decoded by no more than a two-input gate; eight two-input gates decode the states for our example Johnson counter. No matter how long the Johnson counter, only 2-input decoder gates are needed. Note that we could have used uninverted inputs to the AND gates by changing the gate inputs from true to inverted at the FFs, Q to Q' (and vice versa). However, we are trying to make the diagram above match the data sheet for the CD4022B as closely as practical.

Above, our four phased square waves QA to QD are decoded to eight signals (G0 to G7), each active during one clock period out of a complete 8-clock cycle. For example, G0 is active high when both QA and QD are low. Thus, pairs of the various register outputs define each of the eight states of our Johnson counter example.

Above is the more complete internal diagram of the CD4022B Johnson counter. See the manufacturer's data sheet for minor details omitted. The major new addition to the diagram, as compared to the previous figures, is the disallowed-state detector composed of the two NOR gates. Take a look at the inset state table. There are 8 permissible states, as listed in the table. Since our shifter has four flip-flops, there are a total of 16 states, of which 8 are disallowed: the ones not listed in the table.

In theory, we will not get into any of the disallowed states as long as the shift register is RESET before first use. However, in the "real world", after many days of continuous operation, due to unforeseen noise, power line disturbances, near lightning strikes, etc., the Johnson counter could get into one of the disallowed states. For high reliability applications, we need to plan for this slim possibility. More serious is the case where the circuit is not cleared at power-up; in that case there is no way to know which of the 16 states the circuit will power up in. Once in a disallowed state, the Johnson counter will not return to any of the permissible states without intervention. That is the purpose of the NOR gates. Examine the table for the sequence (QA QB QC) = (010). Nowhere does this sequence appear in the table of allowed states. Therefore (010) is disallowed.
It should never occur. If it does, the Johnson counter is in a disallowed state, which it needs to exit to any allowed state. Suppose that (QA QB QC) = (010). The second NOR gate will replace QB = 1 with a 0 at the D input to FF QC. In other words, the offending 010 is replaced by 000, and 000, which does appear in the table, will be shifted right. There are many triple-0 sequences in the table. This is how the NOR gates get the Johnson counter out of a disallowed state to an allowed state. Not all disallowed states contain a 010 sequence; however, after a few clocks this sequence will appear, so that any disallowed state will eventually be escaped. If the circuit is powered up without a RESET, the outputs will be unpredictable for a few clocks until an allowed state is reached. If this is a problem for a particular application, be sure to RESET on power-up.

A pair of integrated circuit Johnson counter devices with the output states decoded is available. We have already looked at this internal logic in the CD4022 discussion of Johnson counters above. The 4000 series devices can operate from 3 V to 15 V power supplies. The 74HC part, designed for TTL compatibility, can operate from a 2 V to 6 V supply, count faster, and has greater output drive capability. For complete device data sheets, follow the links.

CD4017 Johnson counter with 10 decoded outputs
CD4022 Johnson counter with 8 decoded outputs

The ANSI symbols for the modulo-10 (divide by 10) and modulo-8 Johnson counters are shown above. The symbol takes on the characteristics of a counter rather than the shift register derivative which it is. Waveforms and operation for the CD4022 modulo-8 counter were shown previously. The CD4017B/ 74HC4017 decade counter is a 5-stage Johnson counter with ten decoded outputs; its operation and waveforms are similar to those of the CD4022. In fact, the CD4017 and CD4022 are both detailed on the same data sheet. See the above links. The 74HC4017 is a more modern version of the decade counter.

These devices are used where decoded outputs are needed instead of the binary or BCD (Binary Coded Decimal) outputs found on normal counters. By decoded, we mean that one line out of the ten lines is active at a time for the '4017, in place of the four-bit BCD code out of conventional counters. See the previous waveforms for 1-of-8 decoding for the '4022 octal Johnson counter.

The above Johnson counter shifts a lighted LED each fifth of a second around the ring of ten. Note that the 74HC4017 is used instead of the CD4017 because the former part has more current drive capability. From the data sheet (at the link above), operating at VCC = 5 V, VOH = 4.6 V at 4 mA; in other words, the outputs can supply 4 mA at 4.6 V to drive the LEDs. Keep in mind that LEDs are normally driven with 10 to 20 mA of current, though they are visible down to 1 mA. This simple circuit illustrates an application of the 'HC4017. Need a bright display for an exhibit? Then use inverting buffers to drive the cathodes of the LEDs, pulled up to the power supply by lower-value anode resistors.

The 555 timer, serving as an astable multivibrator, generates a clock frequency determined by R1, R2 and C1. This drives the 74HC4017 one step per clock, as indicated by the single LED illuminated on the ring. Note that if the 555 does not reliably drive the clock pin of the '4017, run it through a single buffer stage between the 555 and the '4017. A variable R2 could change the step rate. The value of decoupling capacitor C2 is not critical; a similar capacitor should be applied across the power and ground pins of the '4017.
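The one-step-per-fifth-of-a-second rate quoted above follows from the standard 555 astable relation f ≈ 1.44 / ((R1 + 2·R2)·C1). The component values below are hypothetical examples, not read from the schematic; the sketch only shows how one might pick R1, R2 and C1 for a roughly 5 Hz step clock.

    def astable_frequency(r1_ohms, r2_ohms, c1_farads):
        """Approximate 555 astable output frequency: f = 1.44 / ((R1 + 2*R2) * C1)."""
        return 1.44 / ((r1_ohms + 2 * r2_ohms) * c1_farads)

    # Hypothetical values chosen to land near 5 Hz (one LED step every 0.2 s):
    r1 = 10e3      # 10 kilohm
    r2 = 100e3     # 100 kilohm
    c1 = 1.37e-6   # 1.37 uF (a nearby standard value would be used in practice)

    print(round(astable_frequency(r1, r2, c1), 2), "Hz")   # about 5 Hz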
The Johnson counter above generates 3-phase square waves, phased 60° apart with respect to (QA QB QC). However, we need 120° phased waveforms for power applications (see Volume II, AC). Choosing P1=QA, P2=QC, P3=QB' yields the 120° phasing desired. See the figure below. If these (P1 P2 P3) are low-pass filtered to sine waves and amplified, this could be the beginnings of a 3-phase power supply. For example, do you need to drive a small 3-phase 400 Hz aircraft motor? Then feed 6 x 400 Hz (2.4 kHz) to the CLOCK of the above circuit. Note that all these waveforms are 50% duty cycle.

The circuit below produces 3-phase non-overlapping, less than 50% duty cycle, waveforms for driving 3-phase stepper motors. Above, we decode the overlapping outputs QA QB QC to the non-overlapping outputs P0 P1 P2 as shown below. These waveforms drive a 3-phase stepper motor after suitable amplification from the milliamp level to the fractional-amp level, using the ULN2003 drivers shown above or the discrete-component Darlington pair driver shown in the circuit which follows. Not counting the motor driver, this circuit requires three IC (Integrated Circuit) packages: two dual type "D" FF packages and a quad NAND gate.

A single CD4017, above, generates the required 3-phase stepper waveforms in the circuit above by clearing the Johnson counter at count 3. Count 3 persists for less than a microsecond before it clears itself. The other counts (Q0=G0 Q1=G1 Q2=G2) remain for a full clock period each. The Darlington bipolar transistor drivers shown above are a substitute for the internal circuitry of the ULN2003. The design of drivers is beyond the scope of this digital electronics chapter. Either driver may be used with either waveform generator circuit.

The above waveforms make the most sense in the context of the internal logic of the CD4017 shown earlier in this section, though the AND gating equations for the internal decoder are also shown. The signals QA QB QC are the Johnson counter's direct shift register outputs, which are not available on the pin-out. The QD waveform shows the resetting of the '4017 every three clocks. Q0, Q1, Q2, etc. are the decoded outputs which actually are available at output pins.

Above, we generate waveforms for driving a unipolar stepper motor, which only requires one polarity of driving signal. That is, we do not have to reverse the polarity of the drive to the windings. This simplifies the power driver between the '4017 and the motor. Darlington pairs from a prior diagram may be substituted for the ULN2003. Once again, the CD4017B generates the required waveforms with a reset after the terminal count. The decoded outputs Q0 Q1 Q2 Q3 successively drive the stepper motor windings, with Q4 resetting the counter at the end of each group of four pulses.

http://www.st.com/stonline/psearch/index.htm (select standard logics)
http://www.st.com/stonline/books/pdf/docs/2069.pdf
http://www.ti.com/ (Products, Logic, Product Tree)

Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
http://www.ibiblio.org/kuphaldt/electricCircuits/Digital/DIGI_12.html
This variety of angle worksheets covers almost all aspects of angles in geometry. The topics are listed here so you can see what is available.

Basic Angles: simple free worksheets to learn the vertex, sides, position of a point, and naming of angles.
Measuring Angles: includes protractor worksheets, measuring angles and identifying the types.
Complementary and Supplementary: worksheets on complementary and supplementary angles, along with a mixed review.
Intersecting Lines: worksheets based on linear pairs, vertical angles, angles on a straight line, angles around a point, and a few combinations.
Angles in Algebra: must-see worksheets. Use the properties of linear pairs, vertically opposite angles, straight angles and angles at a point to solve algebraic expressions.
Parallel Lines and Transversal: separate worksheets for corresponding angles, alternate angles and consecutive angles, with a few combinations.

Basic Angle Worksheets

Identify the Vertex and Sides: When two rays have a common end point, an angle is formed. The end point is called the vertex, and the rays that form the angle are called its sides. Use these free printable worksheets to identify the vertex and the sides of the angles.

Interior, Exterior or On the Angle: A point is plotted for each angle formed by two rays. Look at the position of the point and tell whether it is in the interior of the angle, in the exterior, or on the angle.

Name the Angles: Name the angle in all possible ways. The different methods are: vertex alone; vertex and sides; angle alone.

Place the centre mark and straight edge of a protractor over the vertex and the base in each figure and measure the angle. There are worksheets with the vertex and sides not mentioned, and worksheets with the vertex and sides mentioned.

Identifying Types of Angles: Use the angle measure to identify whether the angles are acute, obtuse, right, reflex, zero or a complete rotation.

Complementary and Supplementary Angles

Two angles are said to be complementary if their sum equals 90 degrees. Use the definition of complementary angles to solve the worksheets that follow: a simple worksheet with a list of angles, where you calculate the complement of each angle provided; a right angle divided into two, with the measure of one angle given, where you identify the measure of the missing angle; and a matching exercise where you map the angles that are complementary to each other.

If the sum of two angles equals 180 degrees, they are called supplementary angles. There are simple problems in finding the supplement of each angle; a straight angle divided into two, with the measure of one angle given, where you find the measure of the missing angle; and a matching exercise where you map the supplementary angles in the correct order. The mixed review includes both complementary and supplementary angles.

Angles Formed by Intersecting Lines

When a ray stands on a straight line, it forms a linear pair; the angles of a linear pair are supplementary. If any two lines intersect, they form four angles at the point of intersection. The angles that are vertically opposite are usually called vertical angles, and they are congruent.

Linear Pair and Vertical Angles - Combo: Each figure gives only one angle. Find the remaining three angles using both the linear pair and vertical angle relationships.

Angles in a Straight Line: The angle on a straight line is 180°. If one or more rays stand on a line, they form several angles at the meeting point whose sum measures 180°.

Angles Around a Point: The sum of the angles around a point is 360°. Find the missing angles around a point in each figure in the worksheet.

Angles in Algebra

Linear Pair Equation: Set the sum of the linear pair equal to 180° and solve for the variable. A worked example is shown below.
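For instance (the angle expressions here are made up for illustration), if one angle of a linear pair is (3x + 20)° and the other is (2x - 10)°, their sum must be 180°. A short sketch of the arithmetic:

    # Linear pair: (3x + 20) + (2x - 10) = 180  (hypothetical example expressions)
    # Combine like terms: 5x + 10 = 180, so x = (180 - 10) / 5
    x = (180 - 10) / 5
    angle1 = 3 * x + 20                           # 122.0 degrees
    angle2 = 2 * x - 10                           # 58.0 degrees
    print(x, angle1, angle2, angle1 + angle2)     # 34.0 122.0 58.0 180.0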
Equation in Vertical Angles: Because vertical angles have the same degree measure, they are congruent. Set the algebraic expression equal to the given angle and solve for x.

Algebraic Equation in a Straight Line: Set the sum of the angles equal to 180 degrees and find the value of the variable.

Parallel Lines and Transversal Worksheets

When a transversal intersects two parallel lines, it forms corresponding angles, linear pairs, alternate angles, vertical angles and consecutive angles. The following angle worksheets help you practice identifying the different types of angles formed by parallel lines and a transversal.

Look at the corresponding positions of the angles formed by the transversal and find the missing angle.

The angles formed on opposite sides of the transversal are called alternate angles. Based on position, they are further classified as alternate interior and alternate exterior angles.

Consecutive Interior Angles: Consecutive interior angles are on the same side of the transversal, between the parallel lines. They are supplementary.

Consecutive Exterior Angles: The same as consecutive interior angles, but they lie outside the two lines.

Finally, identify all possible angles formed by parallel lines and a transversal.
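To tie these relationships together, here is a small Python sketch (added for illustration; the 65° value is arbitrary) that, given one acute angle formed where a transversal crosses two parallel lines, derives the related angles:

    def transversal_angles(angle_deg):
        """Given one angle formed where a transversal meets one of two parallel lines,
        return the angles predicted by the parallel-line relationships."""
        return {
            "corresponding": angle_deg,               # equal
            "alternate interior": angle_deg,          # equal
            "alternate exterior": angle_deg,          # equal
            "vertical": angle_deg,                    # equal
            "linear pair": 180 - angle_deg,           # supplementary
            "consecutive interior": 180 - angle_deg,  # supplementary
        }

    print(transversal_angles(65))
    # corresponding, alternate and vertical angles are 65; the supplementary ones are 115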
http://www.mathworksheets4kids.com/angles.html
Lesson Plan in Japan (Example) Triangle Congruence Condition Shared by: zic15018 Lesson Plan in Japan (Example) Triangle Congruence Condition Making a Triangle by Straws with a Wire Inside Overall Meaning of “Diagrammatic Congruence” A student deepens to understand the concept of congruence through finding congruent diagrams and constructing figures by himself/herself. The student also learns the basic deductive and inductive reasoning of conditions for realizing congruence. No. of Unit Design Lesson Discovery and Classification by moving figures and 1 turning over figures (mainly from triangles to rectangles) Corresponding sides and angles Congruent Diagrams (including practice of angle measurement) 2 Congruence in polygons angles (including practice of angle measurement) Triangle Congruence Making a triangle by straws with a wire 3 Condition Expressing conditions of forming a triangle by sentences 4 (Rectangle Congruence Students try to find rectangle congruence condition by * Condition) themselves Drawing Congruence Expressing triangle congruence condition by drawing 5 First Experience of Proof Proving angle bisector by applying triangle congruence 6 Purpose of today’s lesson A student can verify congruence by putting one triangle on top of one another. Mathematical A student can define a triangle by movements of changing Operational and triangle’s sides and angles. Expressive Literacy (A student can express triangle congruence condition by sentences. → shifting the lesson to the purpose of the next lesson) A student can think reasons of trilateral joint condition Mathematical Logical A student can inductively think congruence condition through Literacy changing shapes of a triangle. Mathematical Creative A student can examine other cases by freely moving sides of a Literacy triangle. Students help each other for their weak points when they work together. Mathematical Activity A student can monitor the other student’s thought. (A student say Literacy wards to the next student like “if it is like this, what do you think?” or “how about this way?”) Preparation of today’s lesson Basically student’s activities can be done in a group of 4 students (2 students/group can also be considered) Preparing straws, wires and other tools: strings, rubber bands, angles made by cardboard, scissors, and cellulose tapes (See Appendix 1). Anticipated students’ What to support, what to Activity Flow Reactions evaluate Time A teacher explains about the activity in the lesson. The teacher says, “Today, 0-2 you will make various kinds of triangles by using straws and wires. Then, you min will examine whether these triangle are congruent or not.” Step 1: The teacher Some students cannot Group members can help 2-15 distributes 3 straws and a make it and get each other if the activity min string to each student (one confused. is simple work. set/student) Some students can verify [Activity] The teacher says, “Let the congruence, but some The teacher makes string through the 3 straws. other students cannot do students remind that Then, make a triangle by that. congruence can be connecting the end to end verified by putting one of string. Is your triangle is on top of the other or congruent to your next checking corresponding person’s triangle?” (They sides and angles. should be congruent.) [Operation] The teacher says, “Please Students somehow try to The teacher facilitates 15-23 untie the string. Then, make different triangles. 
students to ask questions min replace the 3 straws, put the Some students can to the students who string through the 3 straws realize that it is realized that it is and connect the string impossible to do that. impossible to do that. again. Can you make any [Logic and Activity] triangles which are different from your next person’s triangle?” (They cannot do it) Step 2: The teacher says Some students do not The teacher make a 23-33 “This time, you are making instantly understand the student confirm the min a triangle by straws with a teacher’s instruction. teacher’s instruction by wire. Please put the white Some students put red discussing it with the side on the line in Figure 1 and black sides together next student or the of the work sheet. Next, put instead of putting the red group. [Activity] the red side along with side along with the A student can realize angle on the Figure 1. angle. when he/she confirms it Then, put the black side Some students can with the next student. touching the edge of the red quickly complete the [Operation] side. Fix the edges of black work without difficulties. The teacher let students and red straws by cellulose think about other angles tape. After this, please in the triangle. verify congruence with [Creativity] your next person.” Step3: The teacher says Some students cannot The teacher let a student 33-40 “Next, you will put the understand to cross the work with the next min white side on the line in red side and the black student together. Figure 2 of the work sheet. side. [Activity] Then put the red side and Some students can The teacher makes black side along with the quickly complete the students think other angles on Figure 2. What work without difficulties. angles of the red and the shape of triangle is it? black side. Please compare your The teacher asks “Are triangle to your next there angles which make person’s triangle.” the edge of red side and the edge of the black side just touching together?” [Operation] [Logic] [Creativity] The teacher says, “Please Some students cannot The teacher informs 40-50 write your discovery of write down. students that it is very min rules or orders. You may Some students cannot fine to touch and operate discuss it with your group members when you write discuss. the triangle again. down.” Some students can write The teacher makes much and discuss well. students express their though on the teaching materials. [Operation] [Creativity] Appendix 1 Activity Step 1 Result of work Step 2 Figure 1 Step 3 Figure 2 Appendix 2 Distinguishing Characteristics of this Unit 1.1. What is the meaning of learning triangle congruence condition in the junior secondary education? Starting Point of Plane (two dimensions) geometry: Characters of triangle learnt in this unit is the basis for study of all plane geometry. Introduction to Logic: This unit is the entrance of the world of mathematical proof, that is to say “Hypothesis and Conclusion”. Mastering Basis of Drawing Figures: Students grow accustomed to using rulers and a pair of compasses. 1.2. Students’ difficulties Students may face many difficulties because this unit has the three elements mentioned above. Even each element can be difficult for many students. Students have difficulties on the words of “Sides” and “Angles” when they explain and listen to. Students have difficulties to imagine that the triangle congruence condition applies any kinds of triangles. Many students cannot handle rulers and a pair of compasses well.
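The mathematics underlying the straw activity in this plan is the triangle inequality (three lengths close into a triangle only if each one is less than the sum of the other two) together with the SSS congruence condition (three fixed side lengths determine a triangle up to congruence, which is why every student's straw triangle matches the next student's). A minimal sketch with made-up lengths:

    def can_form_triangle(a, b, c):
        """Triangle inequality: every side must be shorter than the sum of the other two."""
        return a < b + c and b < a + c and c < a + b

    print(can_form_triangle(3, 4, 5))   # True  -- these three straws close into a triangle
    print(can_form_triangle(1, 2, 8))   # False -- the two short straws cannot meet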
http://www.docstoc.com/docs/22979968/Lesson-Plan-in-Japan-(Example)-Triangle-Congruence-Condition
See also the Dr. Math FAQ: Browse High School Constructions Stars indicate particularly interesting answers or good places to begin browsing. - Drawing An Ellipse [11/24/1997] How do you draw an ellipse with only a straight edge and a compass? - Drawing Diagrams [08/02/1998] I'm having trouble drawing a good geometry diagram. - Drawing or Constructing an Ellipse or Oval [02/22/2006] I know you can draw an ellipse using a string and two tacks. How do I determine the length of the string and the location of the tacks to draw an ellipse of a particular size? - Find the Center of a Circle Using Compass and Straightedge [10/15/2003] How can I find the center of a circle? - Folding a Circle to Get an Ellipse [01/08/2001] How can I prove that taking a point on a circle, folding it to an interior point, and repeating this process creates an envelope of folds that forms an ellipse? - How Did Socrates Teach the Boy to Double the Area of a Square? [06/15/2010] Reading Plato's Meno leaves a student confused about how the ancient Greeks scaled squares. Doctor Rick walks through this story of Socrates and his method, emphasizing that they would have approached this puzzle -- as well as the Pythagorean Theorem -- geometrically. - The Importance of Geometry Constructions [12/29/1998] Why are geometry constructions important? What do we learn from them? Where have they appeared in math history? - Impossibility of Constructing a Regular Nine-Sided Polygon [04/07/1998] Can you construct a regular 9 sided polygon with just a compass and - Impossible Constructions [01/14/1998] What are the three ancient impossible construction problems of Euclidean - Impossible Constructions? [04/08/1997] My geometry teacher told us there are 3 impossible problems or constructions - what are they? - Inconstructible Regular Polygon [02/22/2002] I've been trying to find a proof that a regular polygon with n sides is inconstructible if n is not a Fermat prime number. - Inscribing a Regular Pentagon within a Circle [04/15/1999] What are the reasons for the steps in inscribing a regular pentagon within a circle with only the help of a compass and a straightedge? - Inscribing a Square in a Triangle [10/13/2000] How do you inscribe a square in a scalene triangle? - Line with Small Compass and Straightedge [10/16/1996] Construct a line segment joining two points farther apart than either a compass or the straightedge can span. - Nine-Sided Polygon [06/11/1997] Can you construct a regular 9-sided polygon inside a circle using only a compass and straight-edge? - Octagon Construction Using Compass Only [02/22/2002] Construct the vertices of a regular octagon using just a compass. The only thing you know about the octagon is the circumradius. - A Point in the Triangle [02/12/1999] Finding the point P in a plane of triangle ABC, where PA + PB +PC is - Precision in Measurement: Perfect Protractor? [10/16/2001] Given that protractors are expected to be accurate to the degree, and in some instances the minute or second, how are angles accurately constructed and marked? - Proving Quadrilateral is a Parallelogram [11/30/2001] We are having a problem with the idea of a quadrilateral having one pair of opposite sides congruent and one pair of opposite angles congruent. - Regular Pentagon Construction Proof [11/23/2001] What is the proof of the construction of a regular pentagon? - Rotate the Square [09/19/2002] Which points on the half-circles are B and D? 
- Sin 20 and Transcendental Numbers [6/29/1995] What is the significance of sin 20 in geometry? - Squaring the Circle [12/22/1997] Can you construct a square at all with the same area as a circle with a - Squaring the Circle [3/16/1996] Where did the phrase "squaring the circle" come from? We found it in literature and wonder about its origins and what it means. - Straightedge and Compass Constructions [12/14/1998] Can you help me with these constructions, using only a straightedge and a compass? A 30, 60, 90 triangle, the three medians of a scalene - Triangle Construction [03/11/2002] Let ABC be a triangle with sides a, b, c. Let r be the radius of the incircle and R the radius of the circumcircle. Knowing a, R, and r, onstruct the triangle using only ruler and compass. - Triangle Construction [09/09/2001] Given a triangle ABC and point D somewhere on the triangle (not a midpoint or vertex), construct a line that bisects the area. - Triangle Construction Given an Angle, the Inradius, and the Semiperimeter [03/26/2002] Given an angle, alpha, the inradius (r), and the semi-perimeter (s), construct the triangle. - Triangle Construction Given Medians [12/12/2001] Given median lengths 5, 6, and 7, construct a triangle. - Trisecting a Line [11/03/1997] How would you trisect a line using a compass and a straight edge? - Trisecting a Line [01/25/1998] Is it possible to trisect a line? (Using propositions 1-34, Book 1 of - Trisecting a Line [01/30/1998] How do I trisect a line using only a straightedge and compass? - Trisecting a Line Segment [08/13/1999] How can I measure one-third of a line of an unknown length using a compass and a straightedge? - Trisecting an Angle [11/21/1996] Is there a proof that you can't trisect an angle? - Trisecting an Angle [06/15/1999] I've come up with a method of approximately trisecting any angle. Can you tell me how accurate it is? - Trisecting an Angle [06/17/2000] I believe I have a simple straightedge and compass construction that trisects any angle except a right angle, but have not been able to write - Trisecting an Angle [4/16/1996] I can bisect an angle easily but I can't trisect it perfectly. Would you please send me instructions? - Trisecting an Angle: Proof [6/3/1996] Is there a proof for how to trisect an angle? - Trisecting an Angle Using Compass and Straightedge [04/29/2004] A student claims he can trisect an arbitrary angle with no measuring and only a straightedge and a compass, using Geometer's Sketchpad to prove his method is correct. Doctor Math talks about why a construction alone is not enough to prove the method. - Trisecting an Angle Using the Conchoid of Nicomedes [08/16/2002] Is it possible that I could have trisected an angle using the
http://mathforum.org/library/drmath/sets/high_constructions.html?s_keyid=37887097&f_keyid=37887100&start_at=41&num_to_see=40
A geographic coordinate system enables every location on the earth to be specified, using mainly a spherical coordinate system. There are three coordinates: latitude, longitude and geodesic height. The earth is not a sphere, but an irregular, changing shape approximating an ellipsoid; the challenge is to define a coordinate system that can accurately state each topographical feature as an unambiguous set of numbers.

Latitude (abbreviation: Lat., symbol φ, pronounced phi) is the angle, measured from the centre of the sphere, between a point on the earth's surface and the equatorial plane. Lines joining points of the same latitude are called parallels, and they trace concentric circles on the surface of the earth, parallel to the equator. The north pole is at 90° N; the south pole is at 90° S. The 0° parallel of latitude is designated the equator. The equator is the fundamental plane of all geographic coordinate systems, and it divides the globe into the Northern and Southern Hemispheres.

Longitude (abbreviation: Long., symbol λ, pronounced lambda) is the angle east or west of the north-south line between the two geographical poles that passes through an arbitrary point. Lines joining points of the same longitude are called meridians. All meridians are halves of great circles, and they are not parallel: they converge at the north and south poles. The line passing through the (former) Royal Observatory, Greenwich (near London in the UK) has been chosen as the international zero-longitude reference line, the Prime Meridian.
Places to the east of Greenwich are in the eastern hemisphere, and places to the west are in the western hemisphere. The antipodal meridian of Greenwich is both 180° W and 180° E. The choice of Greenwich is arbitrary, and in other cultures and times in history other locations have been used as the prime meridian.

By combining these two angles, the horizontal position of any location on Earth can be specified. For example, Baltimore, Maryland (in the USA) has a latitude of 39.3° North and a longitude of 76.6° West. So a vector drawn from the centre of the earth to a point 39.3° north of the equator and 76.6° west of Greenwich will pass through Baltimore.

This latitude/longitude "webbing" is known as the conjugate graticule. In defining an ellipse, the vertical diameter is known as the conjugate diameter, and the horizontal diameter, which is perpendicular (or "transverse") to the conjugate, is the transverse diameter. With a sphere or ellipsoid, the conjugate diameter is known as the polar axis and the transverse as the equatorial axis. The graticule perspective is based on this designation: as the longitudinal rings (geographically defined, all great circles) converge at the poles, it is at the poles that the conjugate graticule is defined. If the polar vertex is "pulled down" 90°, so that the vertex is on the equator, or transverse diameter, then it becomes the transverse graticule, upon which all spherical trigonometry is ultimately based (if the longitudinal vertex is between the poles and the equator, it is considered an oblique graticule).

Geographic coordinates were first used by the astronomer and geographer Ptolemy in his Geographia, using alphabetic Greek numerals based on the sexagesimal (base 60) Babylonian numerals.
This was continued by Muslim geographers using alphabetic Abjad numerals, and later via Arabic numerals. In these systems a full circle is divided into 360 degrees and each degree is divided into 60 minutes. Although seconds, thirds, fourths, etc. were used by Hellenistic and Arabic astronomers, they were not used by geographers, who recognized that their geographic coordinates were imprecise. Today seconds subdivided decimally are used. A minute is designated by ′ or "m" and a second by ″ or "s". Seconds can be expressed as a decimal fraction of a minute, and minutes as a decimal fraction of a degree.

The letters N, S, E, W can be used to indicate the hemisphere, or "+" and "-" can be used instead: north and east are "+", south and west are "-". Latitude and longitude can be separated by a space or a comma. Thus there are several formats for writing coordinates, all of them appearing in the same Lat, Long order. DMS (degrees, minutes, seconds) is the most common format, and is standard on all charts and maps, as well as global positioning systems and geographic information systems.

To completely specify the location of a topographical feature on, in, or above the earth, one also has to specify the vertical distance from the centre of the sphere or from its surface. Because of the ambiguity of "surface" and "vertical", this is more commonly expressed relative to a more precisely defined vertical datum, such as mean sea level at a named point. Each country has defined its own datum; in the United Kingdom the reference point is Newlyn. The distance to the earth's centre can be used both for very deep positions and for positions in space.

Every point that is expressed in spherical coordinates can also be expressed as an x, y, z (Cartesian) coordinate. This is not a useful method for recording positions on maps, but it is used to calculate distances and to perform other mathematical operations. The origin is usually the centre of the sphere, a point close to the centre of the earth.
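As a concrete illustration of that spherical-to-Cartesian conversion, here is a simplified sketch (added here, not from the original article) that treats the earth as a perfect sphere, which the next paragraph explains it is not; the radius and sample coordinates are round illustrative numbers.

    import math

    EARTH_RADIUS_M = 6_371_000  # mean radius; spherical approximation only

    def to_cartesian(lat_deg, lon_deg, height_m=0.0):
        """Convert latitude/longitude/height to x, y, z with the origin at the
        earth's centre; the x axis points at (0, 0), the z axis at the north pole."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        r = EARTH_RADIUS_M + height_m
        return (r * math.cos(lat) * math.cos(lon),
                r * math.cos(lat) * math.sin(lon),
                r * math.sin(lat))

    # Baltimore from the example above: 39.3 N, 76.6 W
    print(to_cartesian(39.3, -76.6))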
The earth is not a sphere, but an irregular, changing shape approximating a biaxial ellipsoid. It is nearly spherical, but has an equatorial bulge making the radius at the equator about 0.3% bigger than the radius measured through the poles. The shorter axis approximately coincides with the axis of rotation. Map-makers choose the true ellipsoid that best fits their need for the area they are mapping. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid. In the United Kingdom there are three common latitude, longitude and height systems in use. The system used by GPS, WGS84, differs at Greenwich from the one used on published maps, OSGB36, by approximately 112 m. The military system ED50, used by NATO, is different again and gives discrepancies of about 120 m to 180 m.

Though early navigators thought of the sea as a flat surface that could be used as a vertical datum, this is far from reality. The earth can be thought of as a series of layers of equal potential energy within its gravitational field. Height is a measurement at right angles to such a surface, and though gravity pulls mainly toward the centre of the earth, the geocentre, there are local variations. The shape of these layers is irregular but essentially ellipsoidal. The choice of which of these layers to use is arbitrary. The reference height we have chosen is the one closest to the average height of the world's oceans. This is called the geoid.

The earth is not static: points move relative to each other due to continental plate motion, subsidence, and diurnal movement caused by the moon and the tides. The daily movement can be as much as a metre. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather-system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the recession of the last ice age, but neighbouring Scotland is rising by only 0.2 cm. These changes are insignificant if a local datum is used. Wikipedia uses the global GPS datum, so these changes are significant.
On a spherical surface at sea level, one latitudinal second measures 30.82 metres, one latitudinal minute 1849 metres, and one latitudinal degree 110.9 kilometres. The circles of longitude, the meridians, meet at the geographical poles, so the west-east width of a second depends on the latitude. On the equator at sea level, one longitudinal second measures 30.92 metres, a longitudinal minute 1855 metres, and a longitudinal degree 111.3 kilometres.

The width of one longitudinal degree at latitude φ can be calculated by this formula (to get the width per minute and second, divide by 60 and 3600, respectively):

(π/180) × M_r × cos φ

where the Earth's average meridional radius M_r approximately equals 6,367,449 m. Due to the average radius value used, this formula is of course not precise. You can get a better approximation of a longitudinal degree at latitude φ from:

(π/180) × a × cos β,  where tan β = (b/a) tan φ

and the Earth's equatorial and polar radii, a and b, equal 6,378,137 m and 6,356,752.3 m respectively.

|Latitude |Town |Degree |Minute |Second |Decimal degree at 4 dp|
|60° |Saint Petersburg |55.65 km |0.927 km |15.42 m |5.56 m|
|51° 28' 38" N |Greenwich |69.29 km |1.155 km |19.24 m |6.93 m|
|45° |Bordeaux |78.7 km |1.31 km |21.86 m |7.87 m|
|30° |New Orleans |96.39 km |1.61 km |26.77 m |9.63 m|
|0° |Quito |111.3 km |1.855 km |30.92 m |11.13 m|
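The table can be approximately reproduced with a short script. This is an illustrative sketch, not part of the original article; it uses the simple spherical formula with the equatorial radius, so the results differ slightly from the table's values.

    import math

    A = 6_378_137.0  # equatorial radius in metres (WGS 84)

    def longitude_degree_km(lat_deg):
        """Width of one degree of longitude at the given latitude, spherical approximation."""
        return math.radians(1) * A * math.cos(math.radians(lat_deg)) / 1000.0

    for name, lat in [("Quito", 0), ("New Orleans", 30), ("Bordeaux", 45),
                      ("Greenwich", 51 + 28/60 + 38/3600), ("Saint Petersburg", 60)]:
        deg = longitude_degree_km(lat)
        print(f"{name:16s} {deg:7.2f} km/deg  {deg/60*1000:7.1f} m/min  {deg/3.6:6.2f} m/sec")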
Latitude and longitude values can be based on several different geodetic systems or datums, the most common being WGS 84, which is used by all GPS equipment and by Wikipedia. Other datums are significant, however, because they were chosen by national cartographical organisations as the best method for representing their region, and these are the datums used on printed maps. Using the latitude and longitude found on a map will therefore not give the same reference as a GPS receiver.

Coordinates from the mapping system can sometimes be changed into another datum using a simple translation. For example, to convert from ETRF89 (GPS) to the Irish Grid, add 49 m to the east and subtract 23.4 m from the north. More generally, one datum is changed into any other datum using a process called a Helmert transformation. This involves converting the spherical coordinates into Cartesian coordinates, applying a seven-parameter transformation (a translation, a 3D rotation and a scale change), and converting back.

In popular GIS software, data projected in latitude/longitude is often specified via a 'Geographic Coordinate System'. For example, data in latitude/longitude with the North American Datum of 1983 is denoted by 'GCS_North_American_1983'.

Geostationary satellites (e.g., television satellites) are over the equator, so their position relative to Earth is expressed in longitude degrees alone. Their latitude does not change, and is always zero over the equator.
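For reference, here is a minimal sketch of the seven-parameter Helmert step mentioned above, in its common small-angle form, applied to Cartesian (x, y, z) coordinates. The parameter values are placeholders for illustration only, not a real datum shift; real shift parameters are published by national mapping agencies.

    import math

    def helmert_transform(x, y, z, tx, ty, tz, scale_ppm, rx_arcsec, ry_arcsec, rz_arcsec):
        """Seven-parameter Helmert transformation: three translations (metres),
        a scale change (parts per million) and three small rotations (arc-seconds)."""
        s = 1.0 + scale_ppm * 1e-6
        rx, ry, rz = (math.radians(a / 3600.0) for a in (rx_arcsec, ry_arcsec, rz_arcsec))
        x2 = tx + s * (x - rz * y + ry * z)
        y2 = ty + s * (rz * x + y - rx * z)
        z2 = tz + s * (-ry * x + rx * y + z)
        return x2, y2, z2

    # Placeholder parameters and point, purely for illustration:
    print(helmert_transform(3_980_000.0, -10_000.0, 4_970_000.0,
                            tx=100.0, ty=-50.0, tz=200.0,
                            scale_ppm=2.0, rx_arcsec=0.1, ry_arcsec=-0.2, rz_arcsec=0.3))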
http://citizendia.org/Geographic_coordinate_system
That's enough of integers and math for now. Python is more than just a calculator. Now let's see what Python can do with text. In this chapter, we will learn how to store text in variables, combine strings together, and display text on the screen. Many of our programs will use text to display our games to the player, and the player will enter text into our programs through the keyboard. We will also make our first program, which greets the user with the text "Hello World!" and asks for the user's name.

In Python, we work with little chunks of text called strings. We can store string values inside variables just like we can store number values inside variables. When we type strings, we put them in between two single quotes ('), like this: spam = 'hello'. The single quotes are there only to tell the computer where the string begins and ends; they are not part of the string value. Now, if you type spam into the shell, you should see the contents of the spam variable (the 'hello' string). This is because Python will evaluate a variable to the value stored inside it (in this case, the string 'hello').

Strings can have almost any keyboard character in them. (Strings can't have single quotes inside of them without using escape characters; escape characters are described later.)

As we did with numerical values in the previous chapter, we can also combine string values together with operators to make expressions. You can add one string to the end of another by using the + operator, which is called string concatenation. Try entering 'Hello' + 'World!' into the shell: the result is the single string 'HelloWorld!', with no space between the two words. To keep the words separate, put a space at the end of the 'Hello' string, before the closing single quote, like this: 'Hello ' + 'World!'.

The + operator works differently on strings and integers because they are different data types. All values have a data type. The data type of the value 'Hello' is a string. The data type of the value 5 is an integer. The data type of a value tells us (and the computer) what kind of data the value is.

Until now we have been typing instructions one at a time into the interactive shell. When we write programs, though, we type in several instructions and have them run all at once. Let's write our first program! The name of the program that provides the interactive shell is IDLE, the Interactive DeveLopment Environment. IDLE also has another part called the file editor. Click on the File menu at the top of the Python Shell window and select New Window. A new blank window will appear for us to type our program in. This window is the file editor. (Figure 3-1: The file editor window.)

A tradition for programmers learning a new language is to make their first program display the text "Hello world!" on the screen. We'll create our own Hello World program now. When you enter your program, don't enter the numbers at the left side of the code. They're there so we can refer to each line by number in our explanation. If you look at the bottom-right corner of the file editor window, it will tell you which line the cursor is currently on.

Enter the following text into the new file editor window. We call this text the program's source code because it contains the instructions that Python will follow to determine exactly how the program should behave. (Remember, don't type in the line numbers!)

IMPORTANT NOTE! The following program should be run by the Python 3 interpreter, not Python 2.6 (or any other 2.x version). Be sure that you have the correct version of Python installed. (If you already have Python 2 installed, you can have Python 3 installed at the same time.) To download Python 3, go to http://python.org/download/releases/3.1.1/ and install this version.
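Here is the hello.py listing that the rest of this chapter walks through line by line; the numbers at the left are only for reference, as explained above, and should not be typed in.

    1. # This program says hello and asks for my name.
    2. print('Hello world!')
    3. print('What is your name?')
    4. myName = input()
    5. print('It is good to meet you, ' + myName)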
The IDLE program will give different types of instructions different colors. After you are done typing this code in, the window should look like Figure 3-3 (Figure 3-3: The file editor window will look like this after you type in the code).

Once you've entered your source code, save it so that you won't have to retype it each time we start IDLE. To do so, choose the File menu at the top of the file editor window, then click on Save As. The Save As window should open. Enter hello.py in the File Name box, then press the Save button. (See Figure 3-4.)

You should save your programs every once in a while as you type them. That way, if the computer crashes or you accidentally exit from IDLE, only the typing you've done since your last save will be lost. Press Ctrl-S to save your file quickly, without using the mouse at all.

A video tutorial of how to use the file editor is available from this book's website at http://inventwithpython.com/videos/.

If you get an error when you run the program, it probably means you are running the program with Python 2 instead of Python 3. You can either install Python 3, or convert the source code in this book to Python 2. Appendix A lists the differences between Python 2 and 3 that you will need for this book.

To load a saved program, choose the File menu's Open item. Do that now, and in the window that appears choose hello.py and press the Open button. Your saved hello.py program should open in the file editor window.

Now it's time to run our program. From the Run menu, choose Run Module, or just press the F5 key on your keyboard. Your program should run in the shell window that appeared when you first started IDLE. Remember, you have to press F5 from the file editor's window, not the interactive shell's window. When your program asks for your name, go ahead and enter it as shown in Figure 3-5 (Figure 3-5: What the interactive shell looks like when running the "Hello World" program).

Now, when you push Enter, the program should greet you (the user) by name. Congratulations! You've written your first program. You are now a beginning computer programmer. (You can run this program again if you like by pressing F5 again.)

How does this program work? Well, each line that we entered is an instruction to the computer that is interpreted by Python in a way that the computer will understand. A computer program is a lot like a recipe: do the first step first, then the second, and so on until you reach the end. Each instruction is followed in sequence, beginning from the very top of the program and working down the list of instructions. After the program executes the first line of instructions, it moves on and executes the second line, then the third, and so on. We call the program's following of instructions step-by-step the flow of execution, or just the execution for short.

Now let's look at our program one line at a time to see what it's doing, beginning with line number 1. This line is called a comment. Any text following a # sign (called the pound sign) is a comment. Comments are not for the computer, but for you, the programmer. The computer ignores them. They're used to remind you of what the program does or to tell others who might look at your code what it is that your code is trying to do. Programmers usually put a comment at the top of their code to give their program a title.
The IDLE program displays comments in red to help them stand out. A function is kind of like a mini-program inside your program. It contains lines of code that are executed from top to bottom. Python provides some built-in functions that we can use. The great thing about functions is that we only need to know what the function does, but not how it does it. (You need to know that the print() function displays text on the screen, but you don't need to know how it does this.) A function call is a piece of code that tells our program to run the code inside a function. For example, your program can call the print() function whenever you want to display a string on the screen. The print() function takes the string you type in between the parentheses as input and displays the text on the screen. Because we want to display Hello world! on the screen, we type the print function name, followed by an opening parenthesis, followed by the 'Hello world!' string and a closing parenthesis. This line is a call to the print function, usually written as print() (with the string to be printed going inside the parentheses). We add parentheses to the end of function names to make it clear that we're referring to a function named print(), not a variable named print. The parentheses at the end of the function let us know we are talking about a function, much like the quotes around the number '42' tell us that we are talking about the string '42' and not the integer 42. Line 3 is another print() function call. This time, the program displays "What is your name?" This line has an assignment statement with a variable (myName) and a function call (input()). When input() is called, the program waits for input; for the user to enter text. The text string that the user enters (your name) becomes the function's output value. Like expressions, function calls evaluate to a single value. The value that the function call evaluates to is called the return value. (In fact, we can also use the word "returns" to mean the same thing as "evaluates".) In this case, the return value of the input() function is the string that the user typed in-their name. If the user typed in Albert, the input() function call evaluates to the string 'Albert'. The function named input() does not need any input (unlike the print() function), which is why there is nothing in between the parentheses. On the last line we have a print() function again. This time, we use the plus operator (+) to concatenate the string 'It is good to meet you, ' and the string stored in the myName variable, which is the name that our user input into the program. This is how we get the program to greet us by name. Once the program executes the last line, it stops. At this point it has terminated or exited and all of the variables are forgotten by the computer, including the string we stored in myName. If you try running the program again with a different name, like Carolyn, it will think that's your name. Remember, the computer only does exactly what you program it to do. In this, our first program, it is programmed to ask you for your name, let you type in a string, and then say hello and display the string you typed. But computers are dumb. The program doesn't care if you type in your name, someone else's name, or just something dumb. You can type in anything you want and the computer will treat it the same way: The computer doesn't care what you name your variables, but you should. 
Giving variables names that reflect what type of data they contain makes it easier to understand what a program does. Instead of name, we could have called this variable abrahamLincoln or nAmE. The computer will run the program the same (as long as you consistently use abrahamLincoln or nAmE). Variable names (as well as everything else in Python) are case-sensitive. Case-sensitive means the same variable name in a different case is considered to be an entirely separate variable name. So spam, SPAM, Spam, and sPAM are considered to be four different variables in Python. They each can contain their own separate values. It's a bad idea to have differently-cased variables in your program. If you stored your first name in the variable name and your last name in the variable NAME, it would be very confusing when you read your code weeks after you first wrote it. Did name mean first and NAME mean last, or the other way around? If you accidentally switch the name and NAME variables, then your program will still run (that is, it won't have any syntax errors) but it will run incorrectly. This type of flaw in your code is called a bug. It is very common to accidentally make bugs in your programs while you write them. This is why it is important that the variable names you choose make sense. It also helps to capitalize variable names if they include more than one word. If you store a string of what you had for breakfast in a variable, the variable name whatIHadForBreakfastThisMorning is much easier to read than whatihadforbreakfastthismorning. This is a convention (that is, an optional but standard way of doing things) in Python programming. (Although even better would be something simple, like todaysBreakfast.) Capitalizing the first letter of each word in variable names makes the program more readable. Now that we have learned how to deal with text, we can start making programs that the user can run and interact with. This is important because text is the main way the user and the computer will communicate with each other. The player will enter text to the program through the keyboard with the input() function. And the computer will display text on the screen when the print() function is executed. Strings are just a different data type that we can use in our programs. We can use the + operator to concatenate strings together. Using the + operator to concatenate two strings together to form a new string is just like using the + operator to add two integers to form a new integer (the sum). In the next chapter, we will learn more about variables so that our program will remember the text and numbers that the player enters into the program. Once we have learned how to use text, numbers, and variables, we will be ready to start creating games.
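As a quick recap of the points above about case-sensitivity and string concatenation, here is a short sketch; the variable names and strings are illustrative, not taken from the book:

# spam and SPAM are separate variables because Python names are case-sensitive.
spam = 'first value'
SPAM = 'second value'
print(spam)   # prints: first value
print(SPAM)   # prints: second value

# A descriptive camelCase name, joined to other text with the + operator:
todaysBreakfast = 'oatmeal'
print('Today I ate ' + todaysBreakfast + '.')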
http://inventwithpython.com/chapter3.html
The systematic study of electrical and magnetic forces began in the late Eighteenth Century. Electrically charged objects were produced by rubbing one substance with another. Two kinds of charge were observed, like charges repel one another, and unlike charges attract. Benjamin Franklin performed experiments with electrically charged pith balls, which led Priestley and Cavendish to try to prove that the electric force was an inverse square force just like the gravitational force. Coulomb proved by his experiments that the force between two unlike charges is indeed inverse to the square of their distance apart, and along the line joining them. The attraction has exactly the form of gravitational attraction between masses. At some stage someone got the idea of describing electrical forces through the notion of an electric field. This is defined as the force per unit charge that a small fictitious charged object would experience from a given distribution of charges when placed at the argument of the field. The surface area of a sphere is proportional to its radius squared, while the electric field of a point charge is inversely proportional to that same quantity. Therefore the integral of the normal component of an electric field around a spherical surface is independent of the radius of the sphere, and only measures the strength of the charge at its center. We call the integral of the normal component of a vector W over any surface the "flux of W through the surface". The remark above can be stated as: the flux of electric field through the surface of a sphere containing a charge at its center is proportional only to the amount of charge and is a constant times the amount of that charge. Gauss generalized this statement to apply to the surface of any region containing the charge by means of the divergence theorem, which he discovered. This theorem implies that the flux of electric field through the boundary of any region of space is an appropriate constant multiplied by the amount of electric charge in the region. Around the turn of the Nineteenth Century, Volta invented the battery, and it became practical for people to produce currents of electricity. Oersted and Ampere discovered in about 1820 that electric currents produce forces that cause magnetized needles to line up in a direction tangential to a circle about the wire. Ampere, in particular, discovered that the magnetic force on such needles produced by a long straight wire carrying electric current is proportional to the current flow and inversely proportional to the distance of the needle from the wire. The circumference of a circle is proportional to its radius and the magnetic field just described is inversely proportional to radius. Therefore the integral around the circumference of any circle around the wire of the component of magnetic field in the direction of the path is independent of its radius. It is an appropriate constant times the "flux" or flow of current through the wire for any such circle. We define the integral of the component of a vector field W in the direction of a path around a closed path to be the circulation of W around that path. Ampere's law can then be stated as: the circulation of magnetic field around a wire is a constant times the flux of current through it. Faraday got the idea that if electric current flux causes magnetic circulation, then there should be some sort of reciprocity: magnetic flux ought to be able to cause electric current circulation. 
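Before turning to Faraday's law, it may help to record the flux and circulation statements above symbolically. This is a modern-notation summary, not taken from the original text; k_E and k_B stand in for the unspecified proportionality constants, since the passage deliberately avoids fixing a unit system:

% Gauss's law in flux form and the divergence theorem for a region V with boundary surface S;
% Ampere's law (steady currents) for a surface S with boundary curve \partial S.
\oint_{S}\mathbf{E}\cdot\hat{\mathbf{n}}\,dA \;=\; k_E\,Q_{\text{enclosed}},
\qquad
\oint_{S}\mathbf{E}\cdot\hat{\mathbf{n}}\,dA \;=\; \int_{V}\nabla\cdot\mathbf{E}\,dV,
\qquad
\oint_{\partial S}\mathbf{B}\cdot d\boldsymbol{\ell} \;=\; k_B\,I_{\text{through }S}.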
In 1831, after looking for such an effect, he discovered his celebrated law of induction: that changing magnetic flux through a surface S produces a circulation of electric field on its boundary. This means it produces a "difference in electrical potential" around the boundary path of S, which means that a charged particle in a wire around it will have work done on it in moving around the wire. This will make electric current flow in a wire around the surface, and that current is a constant times the derivative of the magnetic flux through (any) surface bounded by the wire. By increasing and decreasing the amount of current in one wire, you can make its magnetic force oscillate, which will cause current to flow back and forth in another wire. Somewhere in the middle of the century Stokes discovered his mathematical theorem relating the flux of the curl of a vector field W over a surface S to the circulation of W around the boundary of S. Maxwell used this fact to prove that consistency of the equations of electricity and magnetism requires a modification of Ampere's Law when there are changing electric fields. With this modification he noted (circa 1862) that electric and magnetic fields can display wave-like behavior even in the absence of matter, and he asserted that the phenomenon of light consisted of exactly such waves. His claims produced a prediction of the velocity of light, which had only recently been measured, and it agreed with that measurement precisely. His celebrated differential equations describing the behavior of electromagnetic fields were published in 1874. Maxwell's discoveries were distinguished by being entirely theoretical. He utilized the mathematical implications of Stokes' Theorem rather than an experiment to discover his "displacement current", whose presence made possible his identification of light with electromagnetism. The idea behind Maxwell's discovery is this: according to Stokes' theorem, the flux of the curl of a vector field W through a surface is its circulation around the boundary of the surface. By this theorem, the flux of the curl of any vector field must be the same through any two surfaces with the same boundary. Faraday's discovery allowed people to produce electric current that oscillates by moving magnets near wires, or (equivalently) by moving wires near magnets (as in Davis and Kidder's Therapeutic Device, patented in 1854). Induction of current in one wire from the change in current in another assumes that the changing current in the first produces a changing magnetic field which produces the current in the second according to Faraday's law. But now suppose we have a gap in the second wire. Current will flow in it until charge builds up across the gap, and this current will produce a magnetic field of its own. If the current in both wires is made to oscillate, that is, to flow back and forth as a sine function of time, current will flow much of the time despite the gap, and if the frequency of the sine is large enough, there will be little charge build-up at the gap at any time, and little "impedance" to current flow in the wire. According to Ampere's Law (applied to a non-steady state current situation) there will be oscillating "magnetic circulation" around the wire, from the oscillating current flow in the wire in this situation. But if we take a surface that passes through the wire and deform it to make it pass through the gap instead of the wire, there will be no current through it!
Then Ampere's law would say there was no magnetic circulation on the same path, in contradiction to the previous statement. A circle around the wire with a gap can be filled by a surface that passes through the wire, or else, by distorting that surface while keeping its boundary the same, by one that passes only through the gap. The flux of a curl must be the same through both surfaces. There must therefore be something in the gap that, like current, contributes to the flux of the curl of the magnetic field there. Maxwell concluded that the current flux could not possibly be the flux of the curl of the magnetic field under these circumstances. Ampere's law, which describes steady state current flow adequately, must be modified when current flow is time dependent! The current flux with a given boundary will be different depending on whether we pass our surface through the wire or through the gap. The flux of the curl of the magnetic field must be the same in both. If the flux of the curl of the magnetic field is to be the current flux at the wire, it must be something else at the gap, and that something else must have the same flux. The only thing we know about in the gap is that it contains the changing electric field caused by the charge oscillations on its faces. Maxwell postulated that consistency requires an additional time dependent term in Ampere's law proportional to the flux of the time derivative of this electric field. This term, which he called "displacement current", produces remarkable symmetry in the resulting equations. When written as differential equations, the laws of Gauss, Ampere with Maxwell's modification, and Faraday have the consequence that electric and magnetic fields obey "the wave equation" in the absence of matter, and suggest that there can be waves of electric and magnetic field, and that these waves move with a finite velocity. Maxwell's assertion that light is a form of such wave motion implies a particular finite velocity of light that can be deduced from electric and magnetic phenomena. The symmetries of his equations include not only rotations in ordinary space, but also transformations which mix space with time, called Lorentz transformations. In 1888, Hertz actually created electromagnetic waves and detected them in his laboratory. He connected a coil of wire to a circuit with a small gap and ran current through the wire until the field on the condenser caused a spark; the resulting oscillations of current produced waves that were observable on another similar circuit. Marconi got the notion from this that such waves could be used for communication by causing current to flow in a distant wire. In the 1890s he set up apparatus to transmit signals over ever widening distances, and by 1901 was able to send telegraphic signals across the Atlantic Ocean that were received and used for wireless communication. The physical laws involved in these subjects are few in number and can be stated in a few lines. We will now consider their mathematical implications in terms of the concepts of vector calculus.
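For reference, the laws discussed in this passage can be collected in integral form. The notation is modern and is not quoted from the original; k_1, k_2, and k_3 are placeholders for the unspecified constants of proportionality, since the text works independently of any unit system:

% Stokes' theorem for a vector field W, Faraday's law of induction, and
% Ampere's law with Maxwell's displacement-current term, all over a surface S
% with boundary curve \partial S.
\oint_{\partial S}\mathbf{W}\cdot d\boldsymbol{\ell}
   \;=\; \int_{S}(\nabla\times\mathbf{W})\cdot\hat{\mathbf{n}}\,dA,
\qquad
\oint_{\partial S}\mathbf{E}\cdot d\boldsymbol{\ell}
   \;=\; -\,k_1\,\frac{d}{dt}\int_{S}\mathbf{B}\cdot\hat{\mathbf{n}}\,dA,
\qquad
\oint_{\partial S}\mathbf{B}\cdot d\boldsymbol{\ell}
   \;=\; k_2\int_{S}\mathbf{J}\cdot\hat{\mathbf{n}}\,dA
   \;+\; k_3\,\frac{d}{dt}\int_{S}\mathbf{E}\cdot\hat{\mathbf{n}}\,dA.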
http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter28/section01.html
Mechanics: Newton's Laws of Motion
Newton's Laws of Motion: Problem Set Overview
This set of 30 problems targets your ability to distinguish between mass and weight, determine the net force from the values of the individual forces, relate the acceleration to the net force and the mass, analyze physical situations to draw a free body diagram and solve for an unknown quantity (acceleration or individual force value), and to combine a Newton's second law analysis with kinematics to solve for an unknown quantity (kinematic quantity or a force value). Problems range in difficulty from the very easy and straight-forward to the very difficult and complex. The more difficult problems are color-coded as blue problems.
Mass versus Weight
Mass is a quantity which is dependent upon the amount of matter present in an object; it is commonly expressed in units of kilograms. Being the amount of matter possessed by an object, the mass is independent of its location in the universe. Weight, on the other hand, is the force of gravity with which the Earth attracts an object towards itself. Since gravitational forces vary with location, the weight of an object on the Earth's surface is different than its weight on the moon. Being a force, weight is most commonly expressed in the metric unit of Newtons. Every location in the universe is characterized by a gravitational field constant represented by the symbol g (sometimes referred to as the acceleration of gravity). Weight (or Fgrav) and mass (m) are related by the equation: Fgrav = m • g
Newton's Second Law of Motion
Newton's second law of motion states that the acceleration (a) experienced by an object is directly proportional to the net force (Fnet) experienced by the object and inversely proportional to the mass of the object. In equation form, it could be said that a = Fnet/m. The net force is the vector sum of all the individual force values. If the magnitude and direction of the individual forces are known, then these forces can be added as vectors to determine the net force. Attention must be given to the vector nature of force. Direction is important. An up force and a down force can be added by assigning the down force a negative value and the up force a positive value. In a similar manner, a rightward force and a leftward force can be added by assigning the leftward force a negative value and the rightward force a positive value. The a = Fnet/m equation can be used as both a formula for problem solving and as a guide to thinking. When using the equation as a formula for problem solving, it is important that numerical values for two of the three variables in the equation be known in order to solve for the unknown quantity. When using the equation as a guide to thinking, thought must be given to the direct and inverse relationships between acceleration and the net force and mass. A two-fold or a three-fold increase in the net force will cause the same change in the acceleration, doubling or tripling its value. A two-fold or three-fold increase in the mass will cause an inverse change in the acceleration, reducing its value by a factor of two or a factor of three.
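As a quick illustration of a = Fnet/m used both as a formula and as a guide to thinking, here is a small sketch; the numbers are invented for illustration and are not taken from the problem set:

# Newton's second law as a formula: a = Fnet / m
def acceleration(net_force_N, mass_kg):
    """Return acceleration in m/s^2 given net force in N and mass in kg."""
    return net_force_N / mass_kg

a = acceleration(20.0, 5.0)               # 4.0 m/s^2
a_double_force = acceleration(40.0, 5.0)  # doubling Fnet doubles a: 8.0 m/s^2
a_double_mass = acceleration(20.0, 10.0)  # doubling m halves a: 2.0 m/s^2
print(a, a_double_force, a_double_mass)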
Free Body Diagrams
Free body diagrams represent the forces which act upon an object at a given moment in time. The individual forces which act upon an object are represented by vector arrows. The direction of the arrows indicates the direction of the force and the approximate length of the arrow represents the relative magnitude of the force. The forces are labeled according to their type. A free body diagram can be a useful aid in the problem-solving process. It provides a visual representation of the forces exerted upon an object. If the magnitudes of all the individual forces are known, the diagram can be used to determine the net force. And if the acceleration and the mass are known, then the net force can be calculated and the diagram can be used to determine the value of a single unknown force.
Coefficient of Friction
An object which is moving (or even attempting to move) across a surface encounters a force of friction. Friction force results from the two surfaces being pressed together closely, causing intermolecular attractive forces between molecules of different surfaces. As such, friction depends upon the nature of the two surfaces and upon the degree to which they are pressed together. The friction force can be calculated using the equation: Ffrict = µ • Fnorm The symbol µ (pronounced "mew") represents the coefficient of friction and will be different for different surfaces.
Blending Newton's Laws and the Kinematic Equations
Kinematics pertains to a description of the motion of an object and focuses on questions of how far?, how fast?, how much time? and with what acceleration? To assist in answering such questions, four kinematic equations were presented in the One-Dimensional Kinematics unit. The four equations are listed below.
- d = vo • t + 0.5 • a • t^2
- vf = vo + a • t
- vf^2 = vo^2 + 2 • a • d
- d = ((vo + vf)/2) • t
where
- d = displacement
- t = time
- a = acceleration
- vo = original or initial velocity
- vf = final velocity
Newton's laws and kinematics share one of these questions in common: with what acceleration? The acceleration (a) of the Fnet = m•a equation is the same acceleration of the kinematic equations. Common tasks thus involve:
- using kinematics information to determine an acceleration and then using the acceleration in a Newton's laws analysis, or
- using force and mass information to determine an acceleration value and then using the acceleration in a kinematic analysis.
When analyzing a physics word problem, it is wise to identify the known quantities and to organize them as either kinematic quantities or as F-m-a type quantities. (A short worked sketch of this blending appears after the additional readings below.)
Habits of an Effective Problem-Solver
An effective problem solver by habit approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all have habits which they share in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities in an organized manner, often times recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., vo = 0 m/s, a = 2.67 m/s/s, vf = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper units.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.
Additional Readings/Study Aids:
The following pages from The Physics Classroom tutorial may be useful in assisting you in the understanding of the concepts and mathematics associated with these problems.
- Mass and Weight
- Newton's Second Law
- Determining Acceleration From Force Values
- Determining Individual Force Values
- Friction Force
- The Kinematic Equations
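The blending of a kinematic analysis with a Newton's second law analysis described above can be sketched as follows; the situation and all values are invented for illustration:

# Blend kinematics with Newton's second law.
# A cart of mass 2.0 kg speeds up from rest to 6.0 m/s over 12.0 m: find Fnet.
m = 2.0                      # mass in kg
vo, vf, d = 0.0, 6.0, 12.0   # initial velocity (m/s), final velocity (m/s), displacement (m)

a = (vf**2 - vo**2) / (2 * d)   # from vf^2 = vo^2 + 2*a*d
Fnet = m * a                    # Newton's second law: Fnet = m*a
print(a, Fnet)                  # 1.5 m/s^2 and 3.0 N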
http://www.physicsclassroom.com/calcpad/newtlaws/index.cfm
Foundations of Mathematics Summer Program , Columbia University July 2001 -- Week #4 Notes Roger B. Blumberg From Counting to Probabilities Last week, we learned about permutations and combinations and then tried to generalize the "counting" or "subset making" approach to as many different kinds of problems as we could. Here's another use for the binomial coefficient: Finally, although our discussion of Pascal's Triangle showed how the mathematics of counting can help us identify patterns, you might wonder how the mathematics of combinations can help us with proofs. Here's one example: Prove that the product of any five consecutive integers is divisible by 5! While we could of course prove this by induction, it may be easier to prove it directly: Take a moment to digest that, think about how we could generalize this to show that the product of any k consecutive integers will be divisible by k!, and then we'll move on to the transition from counting to calculating probabilities. 1. Suppose you are dealt 5 cards from a fair deck of 52. How many different ways are there to get a straight? What is the probability of getting a straight? The difference between the answer to the first question (C(9,1)*[C(4,1)^5]) and the second question will just be that in the second case we put the answer over C(52,5), which gives the total number of 5 card hands that are possible. This leads to the basic definition of the probability of a discrete event, E,: 2. Suppose that two students are chosen from this class at random. How many different ways might one chose two female students? If the two students are chosen at random, what is the probability that both students are male? That exactly one is male and one is female? Here again we use binomial coefficients to calculate both the number of successful cases and the total number of possible cases. Thus, if we have a class with 12 females and 8 males, the probability that we have chosen two male students is: C(8,2)/C(20,2). A. The Concepts of Discrete Probability We begin by distinguishing elementary, or simple, events from complex events. For example, if we roll a fair die the event "roll a 4" is simple, while the event "roll an odd number" is complex (i.e. made up of more than one simple events). Now, in addition to the basic definition of probability above we have the following fundamental ideas: Conceptualizing the outcome space (i.e. the set of possible outcomes) is perhaps the most important step in calculating (discrete) probabilities. Consider the calculation of p(prime) in the roll of one die; p(sum=10) in the roll of two dice; p(3H and 2T) in five flips of a coin. Suppose we reconsider the examples if the dice are "loaded", with p(i) = i/21. This may seem trivial, but once we realize that it implies that p(E) = 1 - p(~E) it is extremely powerful. Example 1: Suppose you roll four fair dice. What is the probability that the sum is less than 24? Once we realize that p(sum<24) + p(sum=24) = 1, the answer is easily computed. Example 2: Suppose you have ten addressed envelopes and ten personalized invitations, and the invitations are sorted into the envelopes at random; what is the probability that at least one invitation is in the wrong envelope? Suppose two fair dice are rolled: What is the probability their sum is prime or that you've rolled doubles? Suppose you roll a die and flip a coin. What is the probability you rolled a 5 and flipped a Tail? Notice how the definition captures the size of the outcome space in the denominator. B. 
Repeated trials and the mathematics of counting. Which of the following strings of coin flips is most likely if the coin is fair?: Then why, given ten flips of a fair coin, do we think 5H & 5T any more likely than 10H? Given 12 rolls of a fair die, why is 2 of each outcome more likely than 12 fours? We realize that, for repeated trials, the probability of a complex event will have to take into account both the number of ways the event can occur and the probability of each instance of the event as follows: We modify the binomial formula to accommodate this insight, by realizing that now each of the "bins" or "subsets" can be represented as having a probability attached to it. Thus, for n repeated flips of a coin, the probability of getting r heads is: We can also modify the multinomial formula, and find that for n repeated rolls of a die, the probability of getting exactly: r(1) 1s; r(2) 2s; r(3) 3s; r(4) 4s; r(5) 5s; and r(6) 6s is: For example, suppose we go back to the insects we were talking about at the start of the unit on counting. Suppose we have 20 ants, 30 bees and 50 roaches all in one box, and we choose 10 insects at random (all at once). What is the probability we choose 2 ants, 3 bees, and 5 roaches? Before calculating, do you think the probability is greater than .5? Do you think this outcome is the most likely to occur? How can you reconcile the small probability with the fact that it is the most likely outcome? We now have a mathematical theory of probability which incorporates our theories about counting.
C. Conditional Probability and Bayes Theorem
Recognizing the importance of the outcome space in all of our calculations, it is easy to understand why having information that restricts the outcome space allows us to make more accurate probability judgements. Consider how your answers would differ in the following cases: Clearly the outcome space is more restricted in the second case, and thus our answer in the second case is different than in the first. (Aside: can you think of a piece of information that would have made us judge the possibility lower in the second case?) We can use this example to derive a formula for conditional probability: Finally, we can build on this idea and come up with a method for dealing with cases in which a particular event can occur in any of several subsets. For example, consider a high school population in which some of the members of each grade are smokers. Suppose we want to figure out the probability of choosing a smoker if we randomly choose a student from the school. Bayes Theorem states that if a finite outcome space is completely divided into disjoint subsets (e.g. s[1], s[2], s[3], ..., s[n]), then for any event, E, in the outcome space:
A. In order to review the basics of probability from last time, and the way the counting methods we studied last week are integrated into combinatorial probability, we look at four problems:
B. In preparation for the final exam, in past years I've solicited questions from students. Here are some of the kinds of questions that were asked. Unlike the permutation problems involving repeated letters, we cannot assume that every particular combination of three letters will be duplicated the same number of times. For example, C(13,3) would not count the combination MCH more than once, but would count MCT twice, and would count MSC four times.
Therefore, we have two choices: 1) calculate C(13,3) and subtract the number of combinations we've counted more than once; or 2) realize that, since there are only 8 different letters in MASSACHUSETTS, we can calculate C(8,3) and then add on all those combinations we left out. If we choose the second approach, we see that the combinations we've left out are just those with two As (there are 7 of these), two Ss (there are 7 of these too), two Ts (again, there are 7), and the single combination of SSS.
I. Since the case of n=1 doesn't make much sense (what is C(1,2)?), we consider n=2. We get 2 * C(2,2) + 2^2 = 2 * 1 + 4 = 6, which equals C(4,2).
II. We want to show that (2n+2)!/((2n)!*2!) - (2n)!/((2n-2)!*2!) = (n+1)^2 - n^2 + 2(C(n+1,2) - C(n,2)), that is, (2n+2)!/((2n)!*2!) - (2n)!/((2n-2)!*2!) = 2n + 1 + 2(C(n+1,2) - C(n,2)). Finding common denominators and expanding the left side, and rewriting the combination of (n+1) elements on the right side in terms of the combination of n elements (recalling the equality from Pascal's Triangle), we get: (4n^2 + 6n + 2)/2 - (4n^2 - 2n)/2 = 2n + 1 + 2(n), i.e., (8n + 2)/2 = 2n + 1 + 2n = 4n + 1.
This is a clear example of repeated trials, and so the probability must take into account both the number of ways we can get 6 heads in 8 flips, and the probability of each way. We therefore calculate p(6H and 2T) = C(8,6) * (1/2)^8. Of course, we could have used C(8,2) in this calculation as well.
Another case of repeated trials, but here we need to distinguish between the probability of two outcomes: 3 and ~3. Every successful outcome of eight rolls (i.e. every outcome that contains exactly three 3s) will contain five "not-3" rolls. Therefore, the probability of each successful outcome is p(3)^3 * p(~3)^5 = (1/6)^3 * (5/6)^5. Thus the probability of getting exactly three 3s in eight rolls is: C(8,3) * (1/6)^3 * (5/6)^5.
Although intuitions differ about the answer, if we consider the differences in the size of the outcome spaces in each experiment, and remember that the probabilities of all the possible outcomes must add to 1, it will (may?) become clear that the case with the smaller number of flips has the higher probability. Now consider which of the following has higher probability: What makes this comparison different than the first? Is it surprising to find that the case with the larger number of flips has the higher probability here? Now to the final exam.
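Several displayed formulas referred to above ("the probability of getting r heads is:", the multinomial version for die rolls, the conditional-probability formula, and the statement following Bayes Theorem) did not survive extraction. For reference, the standard forms they point to are given below; these are reconstructions, not quotations from the original notes:

% Repeated flips of a fair coin, repeated rolls of a fair die,
% conditional probability, and the total-probability / Bayes form.
P(r\ \text{heads in}\ n\ \text{flips}) \;=\; \binom{n}{r}\left(\tfrac{1}{2}\right)^{n},
\qquad
P(r_1\ \text{1s},\dots,r_6\ \text{6s in}\ n\ \text{rolls})
   \;=\; \frac{n!}{r_1!\,r_2!\cdots r_6!}\left(\tfrac{1}{6}\right)^{n},
\qquad
P(A\mid B) \;=\; \frac{P(A\cap B)}{P(B)},
\qquad
P(E) \;=\; \sum_{i=1}^{n} P(E\mid s_i)\,P(s_i),
\qquad
P(s_i\mid E) \;=\; \frac{P(E\mid s_i)\,P(s_i)}{\sum_{j=1}^{n} P(E\mid s_j)\,P(s_j)}.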
http://cs.brown.edu/~rbb/summermath/GS2001.wk4.html
You’ve heard all about DNA. It’s sort of like your body’s musical score. But what actually makes the music? Proteins. Inside your cells, as many as a million proteins may be at work — making things happen, folding into different structures, changing each other. These proteins work together to create a normally-functioning cell much like the sounds from different instruments come together to create a well-played piece of music. For scientists, figuring out an organism’s proteome is like taking apart a symphony. Which instrument is responsible for that high note? What combination of sounds produces perfect harmony? Your genes, fixed at birth, could be called your body’s musical score. The DNA code instructs the body to make various proteins. But deep inside your cells, those proteins are carrying out your body’s functions — making its music. “Proteins are the molecular machines that actually do the job,” says Alex Tropsha, associate professor of pharmacy. Proteins fold into various structures, act upon each other, change in response to other parts of the cell. Proteins are so numerous and so busy that figuring out an organism’s proteome — the properties and activities of each of its proteins — promises to be thousands of times more complicated than figuring out its genome. Proteomics, as this new field is called, emerged only in the last five years, after scientists sequenced the genomes of humans, fruit flies, yeast, and other creatures. Proteomics wouldn’t be possible without those genome definitions. An essential technique of proteomics — mass spectrometry — basically helps identify proteins by smashing them up and looking at their pieces. It separates proteins into charged particles, then “weighs” those particles. The weight, or mass, of each particle yields a “fingerprint” that a computer can match to a database of amino acids, which are the building blocks of proteins. The amino acid information can be matched to all known gene sequences to identify the protein. “You couldn’t do proteomics without the genome sequences,” says Bill Marzluff, professor of biochemistry. “That’s why proteomics has become important recently.” The more familiar field of genomics and its burgeoning offshoot, proteomics, are helping scientists learn about how cells carry on the business of our bodies. But the more these two fields tell us, the more questions we have. Probably less than 50 percent of the time do genomics and proteomics agree, says Lee Graves, associate professor of pharmacology. For instance, some studies show that while levels of mRNA (an intermediate stage between DNA and protein) increase, the amount of protein produced actually decreases. So just because a gene codes for a protein doesn’t necessarily mean that the protein gets made. Scientists hope that as the field of proteomics grows, it will provide more answers. “Proteomics allows us to skip over the complexity of gene regulation and look directly at changes in the proteins,” Graves says. “This is one of the reasons why proteomics is now so popular.” Carolina is getting into the proteomics field while it’s new, thanks to an anonymous $25 million donation in honor of the late Chancellor Michael Hooker. The gift funded a new proteomics facility and equipment. “It has allowed all of us here to do experiments and accomplish things we couldn’t have done before,” Marzluff says. 
“Proteomics means a new tool to try to solve problems that people have been working on for twenty years in some cases.” For example, Richard Boucher, director of Carolina’s Cystic Fibrosis Center, is applying proteomics to understand the protein that is known to be defective in Cystic Fibrosis (see Mapping Disease). And Jackson Stutts, associate professor of medicine, is leading a team that received a $1.79 million grant from the Cystic Fibrosis Foundation to apply proteomics to understanding the disease. Carolina departments involved in proteomics research include nearly all the departments in the medical and health sciences as well as chemistry and biology. Proteomics promises to yield great insight into the workings of our bodies, but that insight won’t come easy. Because proteins don’t work alone, analyzing them will require digesting an amazing overload of information. Small modifications that happen to proteins can mean big changes in function. The addition of chemical groups such as phosphates or methyl groups, for instance, can be required for a protein to function, or they can make another protein stop working. Proteins can also modify each other. The possible combinations and outcomes boggle the mind. Analyzing, storing, and retrieving these vast amounts of information will take some high-tech tools. At Carolina, Christoph Borchers, faculty director of Carolina’s Proteomics Core Facility, is the keeper of those tools. Scientists perfecting proteomics technology are working toward achieving “high throughput” — analyzing as many proteins as fast as possible. Each week Borchers is getting new modifications for the facility; by the time you’re reading this article, the equipment will be able to analyze 9,600 samples at once. Carolina is one of five U.S.“validation sites” that are testing some of the newest equipment before it’s made available commercially. But proteomics technology in general still has some growing to do. Compare the rate of proteomics analysis — 9,600 samples at once — to the fastest genomics analysis — 20,000 samples. Proteins present so many more complications than genes that the technology has yet to catch up. Some researchers believe that combining genomics and proteomics will yield the best results. Scientists at UNC’s Center for Genomic Sciences are beginning to collaborate with scientists using proteomics tools such as crystallography and protein modeling to help guide their work, says Terry Magnuson, director of the center. The classic way of learning about genes’ functions is to make a sequence variation — commonly called a mutation — in a particular gene and then observe the effect on say, a mouse or a fly. Magnuson says, “We’d like to get to the point where if we make a mutation in a gene, we can ask the question ahead of time, ‘what do we think that mutation is going to do to that protein structure, and do we even want to make a mouse out of it?’” Tropsha and John Sondek, assistant professor of pharmacology, help Magnuson answer that question using two different approaches to studying protein structure. The way that proteins fold in on themselves, the shapes and patterns they make, often determines protein function, and learning more about how folding happens can help scientists find proteins that would make good targets for drugs. Tropsha uses computer modeling to predict protein structure. Sondek takes the experimental route, using crystallography to actually examine and see a protein’s structure. 
Charles Perou, assistant professor of genetics, is also beginning to combine genetics with proteomics. In studying breast cancer tumors, Perou uses microscopy and mRNA to create images known as microarrays, which show 20,000 fluorescent spots representing the expression of as many as 19,000 genes. Perou uses the microarrays to help him classify tumors into various groups. “We had twenty patients with tumors that were all lumped into one group, and the microarray information has helped us divide those tumors into five different groups,” he says. Perou knows, for example, that one group of tumors is resistant to treatment and presents a poor prognosis, while another group responds well to treatment. Now Perou is beginning to work with Borchers to get similar information about proteins involved in these tumors to help define the groups even further, with the hope of developing specific treatments for each group. Marzluff says, “There aren’t too many places that can say they can do both microarrays and proteomics real well right now.” Another area that may benefit from proteomics: mouse genetics. Marzluff says, “With all the mouse genetics we have here, as the mouse proteome becomes available for analysis we would be in a real position to become even more of a leader in mouse biology.” And if Carolina stays in the lead of the high-throughput race, he adds, “I think we could easily become one of the leading proteomics centers in the country.” An Orchestra of Proteins Imagine listening to multiple pieces of music simultaneously and trying to identify each instrument and the part that it’s playing. Proteomics researchers face a similar challenge as they struggle to determine the role of each protein in the body. Proteomics researchers work to determine what proteins are present in a cell, where they are located, how much of each protein is present, and how the proteins function. But because cells are dynamic, the protein constitution of a single cell is constantly changing, so some proteins may not even be present at certain stages of the cell cycle. If you add to this the fact that proteins are continuously modified, the task proteomics seeks to accomplish seems insurmountable. But Christoph Borchers, assistant professor of biochemistry and biophysics and faculty director of Carolina’s Proteomics Core Facility, believes that mass spectrometry (MS) is just the technology for the job. He and his colleagues have assembled a state-of-the-art automated facility, offering the latest way to look at the entire protein environment of a cell. “With this technique you can listen to an entire orchestra of proteins,” Borchers says. To understand Borchers’ excitement, you need to understand MS. First, a molecule is vaporized and charged, or ionized. Until 5 to 10 years ago, ionization and vaporization of fragile molecules were the Achilles heels of biological MS. Traditional techniques used heat, which could destroy peptides or proteins. But today, gentle ionization and vaporization techniques can safely transport charged peptides and proteins into the gas phase. The mass spectrometer then measures each ion’s time of flight — the time it takes for the ion to reach the detector. The time of flight is affected by both an ion’s charge and its weight or mass. For instance, ions with lower weights will travel faster and will have a shorter time of flight. So the time of flight is used to determine each ion’s mass-to-charge ratio, which scientists can then use to identify the peptide or protein in question. 
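The relation between time of flight and mass-to-charge ratio described above can be made explicit. The article gives no formula, so the following is a standard reconstruction, with the symbols (accelerating potential V, flight-path length L, ion mass m and charge q) introduced here for illustration:

% The kinetic energy gained in acceleration equals qV, which fixes the drift
% velocity and hence the time of flight over a path of length L.
\tfrac{1}{2}mv^{2} = qV
\;\Longrightarrow\;
v = \sqrt{\frac{2qV}{m}},
\qquad
t \;=\; \frac{L}{v} \;=\; L\sqrt{\frac{m}{2qV}} \;\propto\; \sqrt{\frac{m}{q}}.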
Borchers explains that MS has three distinct advantages over existing technology. MS is extremely accurate, able to distinguish a protein that weighs 1000.03 Daltons (a Dalton is the unit for protein weight) from one that weighs 1000.04 Daltons. MS is also extremely sensitive, requiring very little protein material. One of the three instruments at the Proteomics Core Facility is able to detect a femtamole of protein — a feat similar to detecting the addition of one drop of water to your backyard pool. This level of sensitivity allows researchers to use the native protein from cells instead of synthetic protein. Finally, MS is able to provide sequence data that can be used to identify unknown proteins. Identifying all of the amino acids and their order in a given protein is referred to as protein sequencing. This process starts with digestion of the protein with an enzyme that systematically chops up the protein into smaller peptide fragments. The molecular weights of the peptides are then measured by MS so accurately that the protein can be identified by searching these masses against a protein or genome database. Like the unique grooves and coloring of jigsaw pieces, the masses of amino acid fragments indicate how the fragments can be pieced together to reveal the whole protein. Borchers is also using MS to characterize proteins. “I call it proteomics, the second generation,” he says, noting that “protein characterization is still not easy.” Protein modifications such as phosphorylation and glycosylation attach additional chemical groups onto the protein, increasing its weight — but not by much. Low-weight modifications are difficult to detect by mass spectrometry. And, many of the modifications result in a negative charge on the protein, also making it difficult to detect. “What proteomics needs to do, finally, is identify not only expressed proteins, but also identify and characterize altered proteins,” Borchers says. A collaboration with the Lineberger Comprehensive Cancer Center seeks to work on some of those questions. “What we want to do is characterize the proteins of the human breast cancer cell,” Borchers says. At 25,000 to 30,000 proteins, the breast cancer cell will certainly be a proving ground for proteomics. If it weren’t for Carolina’s commitment to proteomics, Michael Giddings, assistant professor of microbiology and immunology and of biomedical engineering, might not have come to Carolina. Giddings’ postdoctoral work at the University of Utah got him thinking about the problems of current proteomics methods. “Most people assume that if you can identify the gene that encodes the protein, then the work is done. But there are many things that can happen that alter protein production. The relationship between gene and protein is not linear,” Giddings says. The Giddings lab develops computer software that will gather, calculate, and analyze data. “These data will potentially allow us to trace the entire pathway from protein to gene,” Giddings says. “This would give a better biochemical picture of how proteins are derived from the genome and also provide more information about the functional role of these proteins.” One program — “Proclaim,” developed by lab technician Mark Holmes — uses the molecular weight of a protein, generated by mass spectrometry, to search a set of possible modifications that will allow this protein to match up with a protein mass listed in a database of such masses. 
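The database-matching idea behind peptide mass fingerprinting can be sketched in a few lines. This is only a toy illustration of the general approach, not the Proclaim program described above; all masses, protein names, and the tolerance are invented:

# Toy sketch of matching measured peptide masses against a database (values invented).
measured = [1045.53, 1179.60, 1475.78]          # peptide masses from MS, in Daltons
database = {
    "protein_A": [1045.53, 1300.12, 1475.78],   # hypothetical peptide masses per protein
    "protein_B": [980.45, 1179.60, 2011.02],
}
tolerance = 0.05                                 # allowed mass error, in Daltons

def matches(masses, candidate_peptides):
    """Count measured masses that match a candidate's peptides within the tolerance."""
    return sum(any(abs(m - p) < tolerance for p in candidate_peptides) for m in masses)

best = max(database, key=lambda name: matches(measured, database[name]))
print(best, matches(measured, database[best]))   # best-scoring candidate and its match count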
The Giddings lab is also building a system that can analyze multiple kinds of mass spectrometry measurements and plug these into a data-analysis system. This Protein Inference Engine (PIE) will use the data to map back to the gene to figure out what pathways are used and when. In collaboration with Janne Cannon, professor of microbiology, Giddings is beginning to use this new technology to map the human pathogen Neisseria gonorrhoeae. This organism causes the sexually transmitted disease gonorrhea and billions of dollars in health care costs. N. gonorrhoeae produces variable surface proteins that interact with a host’s cells. According to Giddings, this “barrage of different looks” helps N. gonorrhoeae evade the body’s natural defenses. “Understanding how this organism works will lead to new drugs and treatments that will reduce human suffering,” Giddings says. The system could also be applied to other bacteria and agents that could be used in bioterrorism. In addition, Giddings collaborates with experts studying cystic fibrosis and developmental psychologists in the study of juvenile behavior. Giddings uses high-performance liquid chromatography (HPLC) to simplify the protein mixes that feed into mass spectrometry. Giddings’ group is one of only a few labs that look at intact proteins first. They also use enzymes that cut the protein into smaller pieces. The process produces a banding pattern called a peptide mass fingerprint that is specific to each protein. Giddings hopes to eventually take one of the peptide mass fingerprints and scan the human genome database to figure out what part of the genome might have expressed this pattern. Giddings has high hopes for proteomics and sees it as one of a set of integrated technologies that analyzes a cell in its entirety. Now scientists can model only small portions of a cell, but eventually they will be able to model a complete cell using supercomputers. Giddings says, “As quickly as computer technology is advancing, it’s likely that we’ll be able to model at least some of the simpler cells in maybe five to ten years.” Where the Protein Leads Science has a way of leading researchers down different paths. Carol Otey, assistant professor of cell and molecular physiology, studies cell adhesion and motility and the pathways that regulate the cell shape. Her work is an excellent example of “following where the science leads.” As a postdoctoral fellow in Keith Burridge’s lab, Otey discovered a protein that plays a key role in organizing the actin cytoskeleton. Actin is an abundant cellular protein that forms the filaments that give the cell its shape. This new protein, which she named palladin, is involved in actin assembly. “There’s no evidence that palladin binds to actin directly. Instead, it binds to multiple things that bind to actin,” Otey says. Otey studies palladin function in fibroblasts (cells that give rise to connective tissue), neurons (nerve cells), and glia (support cells for neurons). When there is a cellular signal to change shape, palladin protein levels increase. There are a number of situations in which a cell will change shape. For example, tumor cells grow uncontrollably, but within the tumor there is a subpopulation of traveling tumor cells called metastatic cells, which have a different shape. This shape change involves the actin cytoskeleton and palladin. “If we can find a way to specifically interfere with metastasis, cancer would be a treatable disease,” Otey says. 
“We would like to see whether there is a difference in palladin in normal, tumor, and metastatic cells.” In addition to understanding cancer, Otey wants to know what goes wrong when there is an injury to the brain or spinal cord. After a severe injury to the central nervous system (CNS) — the brain and spinal cord — the body experiences a permanent loss of nerve function, but after an injury to the peripheral nervous system (such as a cut in the skin, severing a sensory nerve), the nerves will recover. Why do nerves in the peripheral nervous system recover better than those in the CNS? One explanation is that when there is an injury to the CNS, neurons are cut and star-shaped glia cells called astrocytes migrate to the area and form a net around the site of injury. If the neuron survives the injury, it is unable to create new connections because it can’t punch through the astrocytes. This obstacle is called a glial scar and is a phenomenon of the CNS rather than the peripheral nervous system. This is why the CNS doesn’t heal well. “The glial scar involves astrocyte motility and shape change, but the molecular events that control glial scar formation are not understood. If we could understand how these changes arise, we could possibly manipulate the scar and prevent them from forming,” Otey says. Otey’s lab uses standard cell biology techniques to manipulate palladin expression in cell culture. She can artificially add palladin by introducing its DNA into cells or inhibit palladin by introducing antisense palladin DNA, which blocks the production of palladin. Using fluorescence microscopy or video microscopy, Otey can observe how the actin cytoskeleton is organized. Otey’s observations show that in normal cells, three hours after an injury, palladin levels increase and cluster along the injury site. Also, cells lacking palladin lose their shape. Because palladin appears to play an important role in metastatic cell movement and glial scar formation, these studies have far-reaching implications for cancer and spinal cord injury research. Repetition breeds familiarity. In only a few short years, images of the spiraling and twisted DNA double helix have become firmly established as the iconic darling for the genome sciences, much like the candy-striped red and white pole had been to men’s barbershops everywhere. Yes, one might say the double helix does contain an almost kitschy purity, evocative of a landmark scientific achievement few of us fully comprehend. Still, the image of the double helix remains only a representation of the molecule. “It would surprise a lot of people to know that DNA is not just two linear strands. It’s wrapped around histone proteins to form a highly folded complex called chromatin,” says Brian Strahl, assistant professor of biochemistry and biophysics. This complex of nucleic acids and proteins binds DNA into higher-order structures, ultimately forming a chromosome. The core histones appear in all organisms that have nucleated cells, including yeast and mammals. Four core histone proteins (H2A, H2B, H3, H4) each contain a “head,” or globular domain, and an amino “tail.” Of interest to Strahl is that these histones, specifically processes that modify them, are thought to play a major role in controlling gene expression and cell division. Another image. Think of chromatin’s structure as a telephone cord with a bead between each coil. 
Each bead represents a nucleosome — chromatin’s fundamental repeating unit consisting of DNA wrapped twice around the four histone “core” proteins, their tails wagging and sometimes touching outside the nucleosome. Meanwhile, a fifth histone (H1) serves as a “linker histone” between nucleosomes. Now fold the cord on itself again and again. This image approximates the scientifically known. Stretches of nucleosomes are folded upon themselves to create higher-order chromatin structures, albeit still not well defined. Although the chromatin packaging allows efficient storage of genetic information (the length of the entire complement of 46 chromosomes in a human cell is about one meter), it also impedes a wide range of cell processes, including access to DNA by transcription factors — the proteins that regulate gene expression. In other words, DNA must become unblocked to allow its information to be read and to produce messenger RNA (mRNA), which in turn must exit the nucleus and become translated into a protein product. How that might occur — how DNA becomes more accessible to transcription factors — is currently an area of intense research scrutiny, including at UNC-Chapel Hill. Strahl, Yi Zhang, assistant professor of biochemistry at the Lineberger Com-prehensive Cancer Center, and others have modified the older view of many scientists that histones play a passive role in chromosomal architecture, a view of histones as primarily structural, packaging DNA into chromatin fibers while having little to do with gene regulation. Independently — Strahl working with yeast cells and Zhang with mammalian cells — the two are discovering that histones play a more dynamic role in chromatin, namely, its loosening or tightening. The researchers’ attention is focused on histone methylation, the addition of a methyl group to lysine, one of the amino acids that comprise the tail region of histone molecules. “We’ve known for three decades that histones can be methylated, but nobody knew the identity of any of the enzymes responsible for this methylation until two years ago,” Zhang says. That was when the first such enzyme was identified which specifically methylates histone H3 at lysine 9. Its presence there was linked to chromosome areas of gene silencing or inactivation. Zhang’s lab has since identified the enzyme SET7, which specifically modifies lysine 4 on the histone H3 tail. This modification makes the chromatin structure more open so other proteins can access particular genes, Zhang says. Moreover, methylation of the same histone at lysine 4 and lysine 9 have opposite effects. Thus, according to Zhang, methylation at either site could determine either gene activation or gene silencing. Still, the situation is probably more complex than that. Among the possibilities, SET7 could have functioning partners yet unidentified, Zhang says. He recently reported discovering another two enzymes, and his lab is intensely studying their functions. For his recent entry into histone modification, Strahl and former colleagues at the University of Virginia identified and characterized Set2, a novel histone that is responsible for methylating lysine 36 on the H3 tail. However, this modification helps to repress or silence gene transcription. Thus, Set2 might be “a coregulator of transcription” in the sense that it turns genes “off” instead of “on,” as in the case of SET7. 
“During development, you have different sets of genes that are important for, say, limb formation, and when the limbs are completed, the genes responsible for them must be turned off,” Strahl says. It may well be that methylation and other modifications are part of an emerging “histone code” of modifications that ultimately regulate gene expression. Strahl and his former mentor at the University of Virginia, David Allis, postulated such a code in a 2000 paper in the journal Nature. This code would be in addition to the now familiar genetic code of repeating As, Cs, Gs, and Ts of DNA nucleotide sequences. Through this histone code, differentially modified histone proteins could organize the genome into stretches of active and silent regions. Moreover, these regions would be inherited during cell division. “We believe that methylation and other modifications that affect histone proteins, including acetylation and phosphorylation, are all dynamically involved and play critical roles in gene activation and deactivation at the appropriate times,” Strahl says. This process, he explained, possibly could work by the ability of these modifications to bring in additional proteins that result in opening or closing of the chromatin molecule. “Yi and I, as well as other labs, are at the frontier of understanding about the enzymes that are so important for the dynamic regulation of chromatin,” Strahl says. “Our findings add to our knowledge of a basic and very important process in human biology. They could offer new insight as to why certain genes in cancer are inappropriately expressed and how that might be corrected.” Meanwhile, in Bill Marzluff’s sprawling and bustling Fordham Hall laboratory, the histone focus is even more basic than trying to tease out chromatin dynamics. “We work on the histone messenger RNA level — DNA’s blueprint for histone proteins — its regulation, processing, transport, and degradation,” graduate student Judy Erkmann says. Her work explores how histone mRNA is transported from the cell nucleus to the cytoplasm. The focus on histone mRNA stems largely from the Marzluff team’s 1996 discovery and cloning of the stem-loop binding protein, SLBP. This unique protein is a major regulatory player in histone mRNA. It latches onto the looped tail of histone mRNA and signals the synthesis of histone proteins crucial to cell functioning during embryogenesis and throughout the organism’s life. But SLBP does more than simply hitch a ride on a loop of nucleotides. It also doggedly performs a string of important duties after it takes the mRNA into a specific region of the nucleus. It interacts with other proteins to make sure the mRNA is properly processed into its final form. And then after helping get it out to the cytoplasm — the cell’s factory floor — SLBP remains bound to histone mRNA, making sure that its instructions are properly translated. “A lot of people work on histones, but just a few groups in the world actually work on understanding how histone mRNA is synthesized and regulated,” says research assistant professor Zbigniew Dominski, whose own work focuses on how mRNA is processed and matures into a translatable message. “In this lab we cover all the different steps of histone mRNA message metabolism.” “And that’s very important because the synthesis of histone proteins depends strictly on metabolism of the mRNA,” says Ricardo Sanchez, the lab’s newest Ph.D. recipient. Histone mRNA translation and regulation remain his major research interest. 
And what if the metabolic regulatory process goes awry? Recent findings in Drosophila — fruit flies — by Robert Duronio, associate professor of biology, in collaboration with Marzluff, highlight the possibilities at the edge of life’s beginnings. Mutated or non-functioning SLBP is associated with failure to develop beyond the early embryo. A similar outcome may well apply to other multicellular creatures, including us. From cystic fibrosis to space rats, proteomics is allowing researchers to uncover the secrets of disease. Genome, transcriptome, and proteome are all fancy terms used by scientists to describe the separate populations of molecules that contribute to our beings, determining who we are. What color eyes we have, how fast we can run, if we are going to have diabetes — it’s all in our genes. With the completion of the human genome project came the hope that the molecular links to what makes us tick would emerge from the three billion As, Ts, Cs, and Gs that constitute our DNA. With this promise still unfulfilled, many scientists are turning to proteomics to study the products of our genes — proteins — and how they change structure, interact with each other, and give rise to disease. According to Lee Graves, associate professor of pharmacology, proteomics is where the action is. “Proteins do everything,” Graves says. “They catalyze the metabolic functions of the cell, they break your glucose down to give you energy, they transmit signals from the outside of the cell to the inside of the cell.” While DNA is a static store of information, proteins are constantly changing, undergoing modifications, being depleted and degraded, acting differently in different tissues. Even though this complexity makes proteomics more challenging to study than genomics, it also confers the potential to greatly increase the understanding of human disease. Proteomics can be used to discover proteins that are associated with diseases and determine which of these can serve as novel targets for drug development or as biological markers of human disorders. Proteomics enables scientists to study how proteins function in healthy cells as well as what goes wrong when disease strikes. Cystic fibrosis (CF), a common hereditary disease affecting approximately 30,000 children and adults in the United States, is just one of the many disorders that can potentially benefit from proteomics research. Even the identification of the cystic fibrosis gene over a dozen years ago has not led to an effective treatment. Although CF is caused by a defect in just one protein, known as cystic fibrosis transmembrane regulator (CFTR), it creates a variety of symptoms including faulty digestion due to a deficiency of pancreatic enzymes, difficulty in breathing due to mucus accumulation in airways, and excessive loss of salt in the sweat — all caused by one faulty protein. “To try to understand why a protein works differently in a sweat duct than in an airway, we need to know all the partners, all the other proteins,” says Richard Boucher, director of the Cystic Fibrosis Center. According to Boucher, CF is caused not only by the absence of the CF protein’s function, but also by the absence of crucial interactions with other proteins in the cell. “There are ways of looking at those tissues and essentially identifying and cataloguing all the proteins in the tissues and then asking which ones are involved in CFTR,” he says. 
Boucher is using proteomics to identify the various parts and determine how they interact with the CF protein to form a functional cell. “Once we know how the system is wired together — that is, what wires or connections are missing because of the missing CFTR protein — then we can reconnect the system or jumpstart it with drugs,” Boucher says.

Muscle atrophy is another disorder that proteomics can help us understand. Diabetes, aging, and diseases such as muscular dystrophy all result in muscle wasting. Scientists at NASA are working with Graves to study the muscle atrophy experienced by astronauts after extended missions in space. “We are collaborating with a NASA group that has an animal model system to mimic weightlessness, and what we are trying to do is to apply modern methods of proteomics and mass spectrometry to analyze the changes,” Graves says. One of the well-observed responses to atrophy is a change in protein expression. By comparing protein levels in normal and atrophic muscle, scientists can profile these differences and determine which proteins are involved in muscle wasting. Tom Hilder, a graduate student in pharmacology, and Jun Han, a postdoctoral fellow, are both working in Graves’ lab to identify the important proteins so that therapies can be developed to hinder atrophy.

In addition to the discovery of novel drug targets, proteomics can also be used to identify biomarkers, which are specific profiles that indicate the severity of disease. “For cystic fibrosis, we really need to know how bad the infection and destruction is in the lung at any given time,” Boucher says. The current methods of assessing the severity of disease — symptoms, chest X-rays, and blood tests — are not very helpful, especially in children. Finding markers for lung inflammation and infection would help researchers decide when to initiate therapy and would help in clinical trials of new therapies. Margaret Leigh, professor of pediatrics, is collaborating with Boucher to locate these biomarkers by collecting one to two hundred serum samples from CF patients and then comparing the data with those from normal subjects. Using large 2-D gels and bioinformatics, Boucher can identify which of the 12,000 serum proteins vary between normal subjects and people with CF and, of these, which ones increase or decrease with the severity of lung disease.

These biomarkers can be a great tool for pediatricians treating CF babies, who are born with normal lungs and do not get their first infection until sometime in the first few years of life. Pediatricians can use biomarkers to detect when that first infection occurs and initiate treatment with antibiotics to eradicate the infection before it begins to damage the lungs. The same biomarkers that track the severity of infection and inflammation of the lung can then be used to determine whether a new drug ameliorates the patient’s condition. According to Boucher, this approach has already produced results with other disorders. “I think that there have been great successes with biomarkers and using proteomics approaches, predominantly in cancer.”

Graves is working with other scientists to apply proteomics to cancer research. Carolina’s Lineberger Comprehensive Cancer Center has breast, prostate, and colorectal cancer projects in the works. “We can profile, or look at a patient’s sample, and say that the expression of this protein correlates with metastatic breast cancer or prostate cancer,” Graves says.
“If we screen out these different patients, we can find something that is now expressed more highly in people that are showing advanced cancer versus those with early cancer or no cancer at all. As a pharmacologist or a biochemist, we can start to design drugs to attack the problem at the molecular level.” Although proteomics entails fishing through thousands of proteins to find just a handful of useful candidates for drug targets or biomarkers, Graves doesn’t mind a bit. “There is a fair amount of fishing in science,” he says. “You just have to have the right net to make sure you catch something at the end of the day.”
http://endeavors.unc.edu/fall2002/music_of_proteomics.html?inline=true
The Federal Reserve System is the central bank of the United States. It was founded by Congress in 1913 to provide the nation with a safer, more flexible, and more stable monetary and financial system. Over the years, its role in banking and the economy has expanded. The Federal Reserve’s duties fall into four general areas: - Conducting the nation’s monetary policy by influencing the monetary and credit conditions in the economy in pursuit of maximum employment, stable prices, and moderate long-term interest rates - Supervising and regulating banking institutions to ensure the safety and soundness of the nation’s banking and financial system and to protect the credit rights of consumers - Maintaining the stability of the financial system and containing systemic risk that may arise in financial markets - Providing financial services to depository institutions, the U.S. government, and foreign official institutions, including playing a major role in operating the nation’s payments system Most developed countries have a central bank whose functions are broadly similar to those of the Federal Reserve. During the nineteenth century and the beginning of the twentieth century, financial panics plagued the nation, leading to bank failures and business bankruptcies that severely disrupted the economy. The failure of the nation’s banking system to effectively provide funding to troubled depository institutions contributed significantly to the economy’s vulnerability to financial panics. Short-term credit is an important source of liquidity when a bank experiences unexpected and widespread withdrawals during a financial panic. A particularly severe crisis in 1907 prompted Congress to establish the National Monetary Commission, which put forth proposals to create an institution that would help prevent and contain financial disruptions of this kind. After considerable debate, Congress passed the Federal Reserve Act “to provide for the establishment of Federal reserve banks, to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” President Woodrow Wilson signed the act into law on December 23, 1913. Structure of the System The Federal Reserve implements monetary policy through its control over the federal funds rate—the rate at which depository institutions trade balances at the Federal Reserve. It exercises this control by influencing the demand for and supply of these balances through the following means: - Open market operations—the purchase or sale of securities, primarily U.S. 
Treasury securities, in the open market to influence the level of balances that depository institutions hold at the Federal Reserve Banks
- Reserve requirements—requirements regarding the percentage of certain deposits that depository institutions must hold in reserve in the form of cash or in an account at a Federal Reserve Bank
- Contractual clearing balances—an amount that a depository institution agrees to hold at its Federal Reserve Bank in addition to any required reserve balance
- Discount window lending—extensions of credit to depository institutions made through the primary, secondary, or seasonal lending programs

Federal Reserve Banks
|Letter|Bank|Branches|
|B|New York|Buffalo, New York|
|D|Cleveland|Cincinnati, Ohio; Pittsburgh, Pennsylvania|
|E|Richmond|Baltimore, Maryland; Charlotte, North Carolina|
|F|Atlanta|Birmingham, Alabama; Jacksonville, Florida; Miami, Florida; Nashville, Tennessee; New Orleans, Louisiana|
|H|St. Louis|Little Rock, Arkansas; Louisville, Kentucky; Memphis, Tennessee|
|J|Kansas City|Denver, Colorado; Oklahoma City, Oklahoma; Omaha, Nebraska|
|K|Dallas|El Paso, Texas; Houston, Texas; San Antonio, Texas|
|L|San Francisco|Los Angeles, California; Portland, Oregon; Salt Lake City, Utah; Seattle, Washington|

The nation’s commercial banks can be divided into three types according to which governmental body charters them and whether or not they are members of the Federal Reserve System. Those chartered by the federal government (through the Office of the Comptroller of the Currency in the Department of the Treasury) are national banks; by law, they are members of the Federal Reserve System. Banks chartered by the states are divided into those that are members of the Federal Reserve System (state member banks) and those that are not (state nonmember banks). State banks are not required to join the Federal Reserve System, but they may elect to become members if they meet the standards set by the Board of Governors. As of March 2004, of the nation’s approximately 7,700 commercial banks, approximately 2,900 were members of the Federal Reserve System—approximately 2,000 national banks and 900 state banks.

The Federal Reserve System uses advisory committees in carrying out its varied responsibilities. Three of these committees advise the Board of Governors directly:
- Federal Advisory Council. This council, which is composed of twelve representatives of the banking industry, consults with and advises the Board on all matters within the Board’s jurisdiction. It ordinarily meets four times a year, as required by the Federal Reserve Act. These meetings are held in Washington, D.C., customarily on the first Friday of February, May, September, and December, although occasionally the meetings are set for different times to suit the convenience of either the council or the Board. Annually, each Reserve Bank chooses one person to represent its District on the Federal Advisory Council; members customarily serve three one-year terms and elect their own officers.
- Consumer Advisory Council. This council, established in 1976, advises the Board on the exercise of its responsibilities under the Consumer Credit Protection Act and on other matters in the area of consumer financial services. The council’s membership represents the interests of consumers, communities, and the financial services industry. Members are appointed by the Board of Governors and serve staggered three-year terms. The council meets three times a year in Washington, D.C., and the meetings are open to the public.
- Thrift Institutions Advisory Council. After the passage of the Depository Institutions Deregulation and Monetary Control Act of 1980, which extended to thrift institutions the Federal Reserve’s reserve requirements and access to the discount window, the Board of Governors established this council to obtain information and views on the special needs and problems of thrift institutions. Unlike the Federal Advisory Council and the Consumer Advisory Council, the Thrift Institutions Advisory Council is not a statutorily mandated body, but it performs a comparable function in providing firsthand advice from representatives of institutions that have an important relationship with the Federal Reserve. The council meets with the Board in Washington, D.C., three times a year. The members are representatives from savings and loan institutions, mutual savings banks, and credit unions. Members are appointed by the Board of Governors and generally serve for two years. The Federal Reserve Banks also use advisory committees. Of these advisory committees, perhaps the most important are the committees (one for each Reserve Bank) that advise the Banks on matters of agriculture, small business, and labor. Biannually, the Board solicits the views of each of these committees by mail. Monetary Policy and the Economy The Federal Reserve sets the nation’s monetary policy to promote the objectives of maximum employment, stable prices, and moderate long-term interest rates. The challenge for policy makers is that tensions among the goals can arise in the short run and that information about the economy becomes available only with a lag and may be imperfect. Goals of Monetary Policy The goals of monetary policy are spelled out in the Federal Reserve Act, which specifies that the Board of Governors and the Federal Open Market Committee should seek “to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” Stable prices in the long run are a precondition for maximum sustainable output growth and employment as well as moderate long-term interest rates. When prices are stable and believed likely to remain so, the prices of goods, services, materials, and labor are undistorted by inflation and serve as clearer signals and guides to the efficient allocation of resources and thus contribute to higher standards of living. Moreover, stable prices foster saving and capital formation, because when the risk of erosion of asset values resulting from inflation—and the need to guard against such losses—are minimized, households are encouraged to save more and businesses are encouraged to invest more. How Monetary Policy Affects the Economy The initial link in the chain between monetary policy and the economy is the market for balances held at the Federal Reserve Banks. Depository institutions have accounts at their Reserve Banks, and they actively trade balances held in these accounts in the federal funds market at an interest rate known as the federal funds rate. The Federal Reserve exercises considerable control over the federal funds rate through its inf luence over the supply of and demand for balances at the Reserve Banks. The FOMC sets the federal funds rate at a level it believes will foster financial and monetary conditions consistent with achieving its monetary policy objectives, and it adjusts that target in line with evolving economic developments. 
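To make the mechanics described above a bit more concrete, here is a minimal, purely illustrative Python sketch of how a target federal funds rate, combined with an assumed demand schedule for Federal Reserve balances, implies the quantity of balances that would need to be supplied. The linear demand curve, its parameters, and the dollar figures are hypothetical assumptions for illustration; they are not Federal Reserve data or actual operating procedure.

```python
# Stylized market for Federal Reserve balances (illustrative only).
# Assumption: demand for balances falls as the funds rate rises, approximated
# here by a simple linear schedule -- not actual Federal Reserve data.

def demanded_balances(funds_rate_pct, autonomous_demand=20.0, slope=4.0):
    """Billions of dollars of balances demanded at a given funds rate (hypothetical)."""
    return autonomous_demand + slope * (6.0 - funds_rate_pct)

def open_market_operation(target_rate_pct, current_supply):
    """Purchase (+) or sale (-) of securities, in billions, that would make the
    supply of balances equal demand at the chosen target rate."""
    return demanded_balances(target_rate_pct) - current_supply

target = 1.0          # hypothetical target federal funds rate, percent
supply_today = 38.0   # hypothetical balances currently supplied, $ billions
op = open_market_operation(target, supply_today)
action = "purchase" if op > 0 else "sale"
print(f"A {action} of about {abs(op):.1f} billion would align supply with "
      f"demand at a {target:.2f} percent target.")
```

The only point of the sketch is that, given a demand schedule, choosing a rate target pins down the quantity of balances to be supplied, which is why the discussion that follows focuses on how supply and demand for balances are influenced.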
A change in the federal funds rate, or even a change in expectations about the future level of the federal funds rate, can set off a chain of events that will affect other short-term interest rates, longer-term interest rates, the foreign exchange value of the dollar, and stock prices. In turn, changes in these variables will affect households’ and businesses’ spending decisions, thereby affecting growth in aggregate demand and the economy. Short-term interest rates, such as those on Treasury bills and commercial paper, are affected not only by the current level of the federal funds rate but also by expectations about the overnight federal funds rate over the duration of the short-term contract. As a result, short-term interest rates could decline if the Federal Reserve surprised market participants with a reduction in the federal funds rate, or if unfolding events convinced participants that the Federal Reserve was going to be holding the federal funds rate lower than had been anticipated. Similarly, short-term interest rates would increase if the Federal Reserve surprised market participants by announcing an increase in the federal funds rate, or if some event prompted market participants to believe that the Federal Reserve was going to be holding the federal funds rate at higher levels than had been anticipated. Limitations of Monetary Policy Monetary policy is not the only force acting on output, employment, and prices. Many other factors affect aggregate demand and aggregate supply and, consequently, the economic position of households and businesses. Some of these factors can be anticipated and built into spending and other economic decisions, and some come as a surprise. On the demand side, the government inf luences the economy through changes in taxes and spending programs, which typically receive a lot of public attention and are therefore anticipated. For example, the effect of a tax cut may precede its actual implementation as businesses and households alter their spending in anticipation of the lower taxes. Also, forward-looking financial markets may build such fiscal events into the level and structure of interest rates, so that a stimulative measure, such as a tax cut, would tend to raise the level of interest rates even before the tax cut becomes effective, which will have a restraining effect on demand and the economy before the fiscal stimulus is actually applied. Other changes in aggregate demand and supply can be totally unpredictable and inf luence the economy in unforeseen ways. Examples of such shocks on the demand side are shifts in consumer and business confidence, and changes in the lending posture of commercial banks and other creditors. Lessened confidence regarding the outlook for the economy and labor market or more restrictive lending conditions tend to curb business and household spending. On the supply side, natural disasters, disruptions in the oil market that reduce supply, agricultural losses, and slowdowns in productivity growth are examples of adverse supply shocks. Such shocks tend to raise prices and reduce output. Monetary policy can attempt to counter the loss of output or the higher prices but cannot fully offset both. In practice, as previously noted, monetary policy makers do not have up-to-the-minute information on the state of the economy and prices. Useful information is limited not only by lags in the construction and availability of key data but also by later revisions, which can alter the picture considerably. 
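As a rough, hypothetical illustration of the point above, that a short-term rate embeds the expected path of the overnight federal funds rate over the life of the instrument, the sketch below treats a three-month rate as the simple average of the expected overnight rate over those three months. The rate paths are invented, and term premiums and compounding are ignored, so this is a sketch of the expectations channel rather than a pricing model.

```python
# Expectations channel, in miniature: a 3-month rate approximated as the
# average of the overnight federal funds rate expected over those 3 months.
# The expected paths below are hypothetical; term premiums are ignored.

def implied_term_rate(expected_overnight_path_pct):
    """Simple average of expected overnight rates over the instrument's life."""
    return sum(expected_overnight_path_pct) / len(expected_overnight_path_pct)

baseline = [2.00, 2.00, 2.00]          # funds rate expected to stay at 2.00%
after_surprise_cut = [1.75, 1.75, 1.75]  # surprise cut, expected to persist
anticipated_cut = [2.00, 1.75, 1.75]     # no action yet, but a cut is expected

for label, path in [("baseline", baseline),
                    ("surprise cut", after_surprise_cut),
                    ("anticipated cut", anticipated_cut)]:
    print(f"{label:16s} -> 3-month rate ~ {implied_term_rate(path):.2f}%")
```

Note that the anticipated cut lowers the three-month rate even though the overnight rate has not yet moved, which is the behavior described in the paragraph above.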
Guides to Monetary Policy Although the goals of monetary policy are clearly spelled out in law, the means to achieve those goals are not. Among those frequently mentioned are monetary aggregates, the level and structure of interest rates, the so-called Taylor rule, and foreign exchange rates. Some suggest that one of these guides be selected as an intermediate target—that is, that a specific formal objective be set for the intermediate target and pursued aggressively with the policy instruments. Monetary aggregates have at times been advocated as guides to monetary policy on the grounds that they may have a fairly stable relationship with the economy and can be controlled to a reasonable extent by the central bank, either through control over the supply of balances at the Federal Reserve or the federal funds rate. An increase in the federal funds rate (and other short-term interest rates), for example, will reduce the attractiveness of holding money balances relative to now higher-yielding money market instruments and thereby reduce the amount of money demanded and slow growth of the money stock. There are a few measures of the money stock—ranging from the transactions-dominated M1 to the broader M2 and M3 measures, which include other liquid balances—and these aggregates have different behaviors. The Components of the Monetary Aggregates The Federal Reserve publishes data on three monetary aggregates. The first, M1, is made up of types of money commonly used for payment, basically currency and checking deposits. The second, M2, includes M1 plus balances that generally are similar to transaction accounts and that, for the most part, can be converted fairly readily to M1 with little or no loss of principal. The M2 measure is thought to be held primarily by households. The third aggregate, M3, includes M2 plus certain accounts that are held by entities other than individuals and are issued by banks and thrift institutions to augment M2-type balances in meeting credit demands; it also includes balances in money market mutual funds held by institutional investors. The aggregates have had different roles in monetary policy as their reliability as guides has changed. The following details their principal components: - Currency (and traveler’s checks) - Demand deposits - NOW and similar interest-earning checking accounts - Savings deposits and money market deposit accounts - Small time deposits - Retail money market mutual fund balances - Large time deposits - Institutional money market mutual fund balances - Repurchase agreements Interest rates have frequently been proposed as a guide to policy, not only because of the role they play in a wide variety of spending decisions but also because information on interest rates is available on a real-time basis. Arguing against giving interest rates the primary role in guiding monetary policy is uncertainty about exactly what level or path of interest rates is consistent with the basic goals of monetary policy. The appropriate level of interest rates will vary with the stance of fiscal policy, changes in the pattern of household and business spending, productivity growth, and economic developments abroad. It can be difficult not only to gauge the strength of these forces but also to translate them into a path for interest rates. The slope of the yield curve (that is, the difference between the interest rate on longer-term and shorter-term instruments) has also been suggested as a guide to monetary policy. 
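Before the yield-curve discussion continues below, here is a small sketch of how the components listed above nest inside one another. The grouping follows the standard definitions of the aggregates, consistent with the description given earlier in this section, and the dollar amounts are placeholders rather than published data.

```python
# Nested monetary aggregates built from the components listed above.
# Grouping follows the definitions in the text (M2 = M1 plus ..., M3 = M2
# plus ...); all dollar amounts are placeholders, not actual statistics.

m1_components = {
    "currency_and_travelers_checks": 700.0,        # $ billions, hypothetical
    "demand_deposits": 330.0,
    "now_and_other_checkable_deposits": 300.0,
}
m2_extra_components = {
    "savings_deposits_and_mmdas": 3300.0,
    "small_time_deposits": 800.0,
    "retail_money_market_fund_balances": 750.0,
}
m3_extra_components = {
    "large_time_deposits": 900.0,
    "institutional_money_market_fund_balances": 1100.0,
    "repurchase_agreements": 500.0,
}

m1 = sum(m1_components.values())
m2 = m1 + sum(m2_extra_components.values())   # M2 includes all of M1
m3 = m2 + sum(m3_extra_components.values())   # M3 includes all of M2

print(f"M1 = {m1:,.0f}, M2 = {m2:,.0f}, M3 = {m3:,.0f}  (hypothetical, $ billions)")
```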
Whereas short-term interest rates are strongly inf luenced by the current setting of the policy instrument, longer-term interest rates are inf luenced by expectations of future short-term interest rates and thus by the longer-term effects of monetary policy on inflation and output. For example, a yield curve with a steeply positive slope (that is, longer-term interest rates far above short-term rates) may be a signal that participants in the bond market believe that monetary policy has become too expansive and thus, without a monetary policy correction, more inf lationary. Conversely, a yield curve with a downward slope (short-term rates above longer rates) may be an indication that policy is too restrictive, perhaps risking an unwanted loss of output and employment. However, the yield curve is also inf luenced by other factors, including prospective fiscal policy, developments in foreign exchange markets, and expectations about the future path of monetary policy. Thus, signals from the yield curve must be interpreted carefully. The Taylor Rule The “Taylor rule,” named after the prominent economist John Taylor, is another guide to assessing the proper stance of monetary policy. It relates the setting of the federal funds rate to the primary objectives of monetary policy—that is, the extent to which inflation may be departing from something approximating price stability and the extent to which output and employment may be departing from their maximum sustainable levels. For example, one version of the rule calls for the federal funds rate to be set equal to the rate thought to be consistent in the long run with the achievement of full employment and price stability plus a component based on the gap between current inflation and the inflation objective less a component based on the shortfall of actual output from the full-employment level. If inflation is picking up, the Taylor rule prescribes the amount by which the federal funds rate would need to be raised or, if output and employment are weakening, the amount by which it would need to be lowered. The specific parameters of the formula are set to describe actual monetary policy behavior over a period when policy is thought to have been fairly successful in achieving its basic goals. Although this guide has appeal, it too has shortcomings. The level of short-term interest rates associated with achieving longer-term goals, a key element in the formula, can vary over time in unpredictable ways. Moreover, the current rate of inf lation and position of the economy in relation to full employment are not known because of data lags and difficulties in estimating the full-employment level of output, adding another layer of uncertainty about the appropriate setting of policy. Foreign Exchange Rates Exchange rate movements are an important channel through which monetary policy affects the economy, and exchange rates tend to respond promptly to a change in the federal funds rate. Moreover, information on exchange rates, like information on interest rates, is available continuously throughout the day. Interpreting the meaning of movements in exchange rates, however, can be difficult. A decline in the foreign exchange value of the dollar, for example, could indicate that monetary policy has become, or is expected to become, more accommodative, resulting in inf lation risks. 
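Before returning to exchange rates, here is a minimal sketch of the Taylor-rule arithmetic paraphrased above. The 2 percent equilibrium real rate, the 2 percent inflation objective, and the 0.5 response coefficients are common textbook values used only for illustration; they are not parameters endorsed by the Federal Reserve, and, as the text notes, the true values of the key inputs are themselves uncertain.

```python
# One textbook version of the Taylor rule (illustrative parameter values).
# prescribed_rate = real_neutral_rate + inflation
#                   + 0.5 * (inflation - inflation_target)
#                   + 0.5 * output_gap
# where output_gap is the percent deviation of actual output from its
# full-employment level (negative when output falls short).

def taylor_rule(inflation_pct, output_gap_pct,
                real_neutral_rate_pct=2.0, inflation_target_pct=2.0,
                a_pi=0.5, a_y=0.5):
    """Prescribed federal funds rate, in percent (stylized, not official)."""
    return (real_neutral_rate_pct + inflation_pct
            + a_pi * (inflation_pct - inflation_target_pct)
            + a_y * output_gap_pct)

print(taylor_rule(inflation_pct=2.0, output_gap_pct=0.0))   # 4.0: at target, no gap
print(taylor_rule(inflation_pct=3.0, output_gap_pct=0.0))   # 5.5: inflation picking up
print(taylor_rule(inflation_pct=2.0, output_gap_pct=-2.0))  # 3.0: output shortfall
```

Even in this toy form, the rule captures the prescriptions described above: a higher federal funds rate when inflation overshoots the objective and a lower one when output and employment fall short.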
But exchange rates respond to other inf luences as well, notably developments abroad; so a weaker dollar on foreign exchange markets could instead ref lect higher interest rates abroad, which make other currencies more attractive and have fewer implications for the stance of U.S. monetary policy and the performance of the U.S. economy. Conversely, a strengthening of the dollar on foreign exchange markets could ref lect a move to a more restrictive monetary policy in the United States—or expectations of such a move. But it also could ref lect expectations of a lower path for interest rates elsewhere or a heightened perception of risk in foreign financial assets relative to U.S. assets. Some have advocated taking the exchange rate guide a step further and using monetary policy to stabilize the dollar’s value in terms of a particular currency or in terms of a basket of currencies. However, there is a great deal of uncertainty about which level of the exchange rate is most consistent with the basic goals of monetary policy, and selecting the wrong rate could lead to a protracted period of def lation and economic slack or to an overheated economy. Also, attempting to stabilize the exchange rate in the face of a disturbance from abroad would short-circuit the cushioning effect that the associated movement in the exchange rate would have on the U.S. economy. The Implementation of Monetary Policy The Federal Reserve exercises considerable control over the demand for and supply of balances that depository institutions hold at the Reserve Banks. In so doing, it influences the federal funds rate and, ultimately, employment, output, and prices. The Federal Reserve implements U.S. monetary policy by affecting conditions in the market for balances that depository institutions hold at the Federal Reserve Banks. The operating objectives or targets that it has used to effect desired conditions in this market have varied over the years. At one time, the FOMC sought to achieve a specific quantity of balances, but now it sets a target for the interest rate at which those balances are traded between depository institutions—the federal funds rate. By conducting open market operations, imposing reserve requirements, permitting depository institutions to hold contractual clearing balances, and extending credit through its discount window facility, the Federal Reserve exercises considerable control over the demand for and supply of Federal Reserve balances and the federal funds rate. Through its control of the federal funds rate, the Federal Reserve is able to foster financial and monetary conditions consistent with its monetary policy objectives. The Market for Federal Reserve Balances The Federal Reserve inf luences the economy through the market for balances that depository institutions maintain in their accounts at Federal Reserve Banks. Depository institutions make and receive payments on behalf of their customers or themselves in these accounts. The end-of-day balances in these accounts are used to meet reserve and other balance requirements. If a depository institution anticipates that it will end the day with a larger balance than it needs, it can reduce that balance in several ways, depending on how long it expects the surplus to persist. For example, if it expects the surplus to be temporary, the institution can lend excess balances in financing markets, such as the market for repurchase agreements or the market for federal funds. 
In the federal funds market, depository institutions actively trade balances held at the Federal Reserve with each other, usually overnight, on an uncollateralized basis. Institutions with surplus balances in their accounts lend those balances to institutions in need of larger balances. The federal funds rate—the interest rate at which these transactions occur—is an important benchmark in financial markets. Daily f luctuations in the federal funds rate ref lect demand and supply conditions in the market for Federal Reserve balances. Demand for Federal Reserve Balances The demand for Federal Reserve balances has three components: required reserve balances, contractual clearing balances, and excess reserve balances. - Required Reserve Balances Required reserve balances are balances that a depository institution must hold with the Federal Reserve to satisfy its reserve requirement. Reserve requirements are imposed on all depository institutions—which include commercial banks, savings banks, savings and loan associations, and credit unions—as well as U.S. branches and agencies of foreign banks and other domestic banking entities that engage in international transactions. Since the early 1990s, reserve requirements have been applied only to transaction deposits, which include demand deposits and interest-bearing accounts that offer unlimited checking privileges. An institution’s reserve requirement is a fraction of such deposits; the fraction—the required reserve ratio—is set by the Board of Governors within limits prescribed in the Federal Reserve Act. A depository institution’s reserve requirement expands or contracts with the level of its transaction deposits and with the required reserve ratio set by the Board. In practice, the changes in required reserves ref lect movements in transaction deposits because the Federal Reserve adjusts the required reserve ratio only infrequently. A depository institution satisfies its reserve requirement by its holdings of vault cash (currency in its vault) and, if vault cash is insufficient to meet the requirement, by the balance maintained directly with a Federal Reserve Bank or indirectly with a pass-through correspondent bank (which in turn holds the balances in its account at the Federal Reserve). The difference between an institution’s reserve requirement and the vault cash used to meet that requirement is called the required reserve balance. If the balance maintained by the depository institution does not satisfy its reserve balance requirement, the deficiency may be subject to a charge. - Contractual Clearing Balances Depository institutions use their accounts at Federal Reserve Banks not only to satisfy their reserve balance requirements but also to clear many financial transactions. Given the volume and unpredictability of transactions that clear through their accounts every day, depository institutions seek to hold an end-of-day balance that is high enough to protect against unexpected debits that could leave their accounts overdrawn at the end of the day and against any resulting charges, which could be quite large. If a depository institution finds that targeting an end-of-day balance equal to its required reserve balance provides insufficient protection against overdrafts, it may establish a contractual clearing balance (sometimes referred to as a required clearing balance). A contractual clearing balance is an amount that a depository institution agrees to hold at its Reserve Bank in addition to any required reserve balance. 
In return, the depository institution earns implicit interest, in the form of earnings credits, on the balance held to satisfy its contractual clearing balance. It uses these credits to defray the cost of the Federal Reserve services it uses, such as check clearing and wire transfers of funds and securities. If the depository institution fails to satisfy its contractual requirement, the deficiency is subject to a charge.
- Excess Reserve Balances
A depository institution may hold balances at its Federal Reserve Bank in addition to those it must hold to meet its reserve balance requirement and its contractual clearing balance; these balances are called excess reserve balances (or excess reserves). In general, a depository institution attempts to keep excess reserve balances at low levels because balances at the Federal Reserve do not earn interest. However, a depository institution may aim to hold some positive excess reserve balances at the end of the day as additional protection against an overnight overdraft in its account or the risk of failing to hold enough balances to satisfy its reserve or clearing balance requirement. This desired cushion of balances can vary considerably from day to day, depending in part on the volume and uncertainty about payments flowing through the institution’s account. The daily demand for excess reserve balances is the least-predictable component of the demand for balances. (See table 3.1 for data on required reserve balances, contractual clearing balances, and excess reserve balances.)

[Table 3.1. Measures of aggregate balances, 2001–2004 (billions of dollars; annual averages of daily data): required reserve balances, contractual clearing balances, and excess reserve balances.]
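Before turning to the supply side, the sketch below works through the demand-side arithmetic just described for a single hypothetical institution. The 10 percent required reserve ratio and every dollar figure are assumptions made for illustration, not current regulatory values.

```python
# Reserve arithmetic for one hypothetical depository institution.
# required reserves        = required reserve ratio * transaction deposits
# required reserve balance = required reserves - vault cash applied
# total demand at the Fed  = required reserve balance
#                            + contractual clearing balance
#                            + desired excess reserve balances

def balance_demand(transaction_deposits, vault_cash,
                   clearing_balance, desired_excess,
                   required_ratio=0.10):          # assumed ratio, not official
    required_reserves = required_ratio * transaction_deposits
    # Vault cash counts toward the requirement; only the shortfall must be
    # held as a balance at the Reserve Bank.
    required_reserve_balance = max(required_reserves - vault_cash, 0.0)
    return required_reserve_balance + clearing_balance + desired_excess

# $500 million of transaction deposits, $20 million of vault cash,
# a $5 million contractual clearing balance, $2 million desired excess.
demand = balance_demand(500.0, 20.0, 5.0, 2.0)
print(f"End-of-day balance the institution aims to hold: ${demand:.1f} million")
# required reserves = 50.0; required reserve balance = 30.0; total demand = 37.0
```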
Supply of Federal Reserve Balances
The supply of Federal Reserve balances to depository institutions comes from three sources: the Federal Reserve’s portfolio of securities and repurchase agreements; loans from the Federal Reserve through its discount window facility; and certain other items on the Federal Reserve’s balance sheet known as autonomous factors. The most important source of balances to depository institutions is the Federal Reserve’s portfolio of securities. The Federal Reserve buys and sells securities either on an outright (also called permanent) basis or temporarily in the form of repurchase agreements and reverse repurchase agreements.

Discount Window Lending
The supply of Federal Reserve balances increases when depository institutions borrow from the Federal Reserve’s discount window. Access to discount window credit is established by rules set by the Board of Governors, and loans are made at interest rates set by the Reserve Banks and approved by the Board. Depository institutions decide to borrow based on the level of the lending rate and their liquidity needs. The supply of balances can vary substantially from day to day because of movements in other items on the Federal Reserve’s balance sheet. These so-called autonomous factors are generally outside the Federal Reserve’s direct day-to-day control. The most important of these factors are Federal Reserve notes, the Treasury’s balance at the Federal Reserve, and Federal Reserve float.

[Table: Consolidated balance sheet of the Federal Reserve Banks, December 31, 2004 (millions of dollars); line items include Federal Reserve notes; reverse repurchase agreements; balance, U.S. Treasury account; other liabilities and capital; all other assets; and balances, all depository institutions.]

Controlling the Federal Funds Rate
The Federal Reserve’s conduct of open market operations, its policies related to required reserves and contractual clearing balances, and its lending through the discount window all play important roles in keeping the federal funds rate close to the FOMC’s target rate. Open market operations are the most powerful and often-used tool for controlling the funds rate. These operations, which are arranged nearly every business day, are designed to bring the supply of Federal Reserve balances in line with the demand for those balances at the FOMC’s target rate. Required reserve balances and contractual clearing balances facilitate the conduct of open market operations by creating a predictable demand for Federal Reserve balances. If, even after an open market operation is arranged, the supply of balances falls short of demand, then discount window lending provides a mechanism for expanding the supply of balances to contain pressures on the funds rate.

Open Market Operations
In theory, the Federal Reserve could conduct open market operations by purchasing or selling any type of asset. In practice, however, most assets cannot be traded readily enough to accommodate open market operations. For open market operations to work effectively, the Federal Reserve must be able to buy and sell quickly, at its own convenience, in whatever volume may be needed to keep the federal funds rate at the target level. These conditions require that the instrument it buys or sells be traded in a broad, highly active market that can accommodate the transactions without distortions or disruptions to the market itself.

Composition of the Federal Reserve’s Portfolio
The overall size of the Federal Reserve’s holdings of Treasury securities depends principally on the growth of Federal Reserve notes; however, the amounts and maturities of the individual securities held depend on the FOMC’s preferences for liquidity. The Federal Reserve has guidelines that limit its holdings of individual Treasury securities to a percentage of the total amount outstanding. These guidelines are designed to help the Federal Reserve manage the liquidity and average maturity of the System portfolio. The percentage limits under these guidelines are larger for shorter-dated issues than for longer-dated ones. Consequently, a sizable share of the Federal Reserve’s holdings is held in Treasury securities with remaining maturities of one year or less. This structure provides the Federal Reserve with the ability to alter the composition of its assets quickly when developments warrant. At the end of 2004, the Federal Reserve’s holdings of Treasury securities were about evenly weighted between those with maturities of one year or less and those with maturities greater than one year.

[Table: U.S. Treasury securities held in the Federal Reserve’s open market account, December 31, 2004 (billions of dollars), by remaining maturity: 1 year or less; more than 1 year to 5 years; more than 5 years to 10 years; more than 10 years.]

Types of Credit
In ordinary circumstances, the Federal Reserve extends discount window credit to depository institutions under the primary, secondary, and seasonal credit programs. The rates charged on loans under each of these programs are established by each Reserve Bank’s board of directors every two weeks, subject to review and determination by the Board of Governors.
The rates for each of the three lending programs are the same at all Reserve Banks, except occasionally for very brief periods following the Board’s action to adopt a requested rate change. The Federal Reserve also has the authority under the Federal Reserve Act to extend credit to entities that are not depository institutions in “unusual and exigent circumstances”; however, such lending has not occurred since the 1930s.

Primary credit is available to generally sound depository institutions on a very short-term basis, typically overnight. To assess whether a depository institution is in sound financial condition, its Reserve Bank regularly reviews the institution’s condition, using supervisory ratings and data on adequacy of the institution’s capital. Depository institutions are not required to seek alternative sources of funds before requesting occasional advances of primary credit, but primary credit is expected to be used as a backup, rather than a regular, source of funding. The rate on primary credit has typically been set 1 percentage point above the FOMC’s target federal funds rate, but the spread can vary depending on circumstances. Because primary credit is the Federal Reserve’s main discount window program, the Federal Reserve at times uses the term discount rate specifically to mean the primary credit rate.

Reserve Banks ordinarily do not require depository institutions to provide reasons for requesting very short-term primary credit. Borrowers are asked to provide only the minimum information necessary to process a loan, usually the requested amount and term of the loan. If a pattern of borrowing or the nature of a particular borrowing request strongly indicates that a depository institution is not generally sound or is using primary credit as a regular rather than a backup source of funding, a Reserve Bank may seek additional information before deciding whether to extend the loan. Primary credit may be extended for longer periods of up to a few weeks if a depository institution is in generally sound financial condition and cannot obtain temporary funds in the market at reasonable terms. Large and medium-sized institutions are unlikely to meet this test.

Secondary credit is available to depository institutions that are not eligible for primary credit. It is extended on a very short-term basis, typically overnight. Reflecting the less-sound financial condition of borrowers of secondary credit, the rate on secondary credit has typically been 50 basis points above the primary credit rate, although the spread can vary as circumstances warrant. Secondary credit is available to help a depository institution meet backup liquidity needs when its use is consistent with the borrowing institution’s timely return to a reliance on market sources of funding or with the orderly resolution of a troubled institution’s difficulties. Secondary credit may not be used to fund an expansion of the borrower’s assets. Loans extended under the secondary credit program entail a higher level of Reserve Bank administration and oversight than loans under the primary credit program. A Reserve Bank must have sufficient information about a borrower’s financial condition and reasons for borrowing to ensure that an extension of secondary credit would be consistent with the purpose of the facility.
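The rate relationships just described can be summarized in a few lines. The 1 percentage point and 50 basis point spreads below are the “typical” settings mentioned in the text; both spreads can vary with circumstances, so treat them as illustrative defaults rather than fixed rules.

```python
# Typical discount window rate spreads described in the text (illustrative;
# the actual spreads can and do vary with circumstances).

def discount_rates(funds_target_pct,
                   primary_spread_pct=1.00,      # typically 1 point over the target
                   secondary_spread_pct=0.50):   # typically 50 bp over primary
    primary = funds_target_pct + primary_spread_pct
    secondary = primary + secondary_spread_pct
    return {"primary_credit": primary, "secondary_credit": secondary}

print(discount_rates(1.00))  # {'primary_credit': 2.0, 'secondary_credit': 2.5}
```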
Moreover, under the Federal Deposit Insurance Corporation Improvement Act of 1991, extensions of Federal Reserve credit to an FDIC-insured depository institution that has fallen below minimum capital standards are generally limited to 60 days in any 120-day period or, for the most severely undercapitalized, to only five days. The Federal Reserve’s seasonal credit program is designed to help small depository institutions manage significant seasonal swings in their loans and deposits. Seasonal credit is available to depository institutions that can demonstrate a clear pattern of recurring swings in funding needs throughout the year—usually institutions in agricultural or tourist areas. Borrowing longer-term funds from the discount window during periods of seasonal need allows institutions to carry fewer liquid assets during the rest of the year and make more funds available for local lending. The seasonal credit rate is based on market interest rates. It is set on the first business day of each two-week reserve maintenance period as the average of the effective federal funds rate and the interest rate on three-month certificates of deposit over the previous reserve maintenance period. The Federal Reserve in the International Sphere The U.S. economy and the world economy are linked in many ways. Economic developments in this country have a major influence on production, employment, and prices beyond our borders; at the same time, developments abroad significantly affect our economy. The U.S. dollar, which is the currency most used in international transactions, constitutes more than half of other countries’ official foreign exchange reserves. U.S. banks abroad and foreign banks in the United States are important actors in international financial markets. The activities of the Federal Reserve and the international economy inf luence each other. Therefore, when deciding on the appropriate monetary policy for achieving basic economic goals, the Board of Governors and the FOMC consider the record of U.S. international transactions, movements in foreign exchange rates, and other international economic developments. And in the area of bank supervision and regulation, innovations in international banking require continual assessments of, and occasional modifications in, the Federal Reserve’s procedures and regulations. The Federal Reserve formulates policies that shape, and are shaped by, international developments. It also participates directly in international affairs. For example, the Federal Reserve occasionally undertakes foreign exchange transactions aimed at inf luencing the value of the dollar in relation to foreign currencies, primarily with the goal of stabilizing disorderly market conditions. These transactions are undertaken in close and continuous consultation and cooperation with the U.S. Treasury. The Federal Reserve also works with the Treasury and other government agencies on various aspects of international financial policy. It participates in a number of international organizations and forums and is in almost continuous contact with other central banks on subjects of mutual concern. The Federal Reserve’s actions to adjust U.S. monetary policy are designed to attain basic objectives for the U.S. economy. But any policy move also inf luences, and is inf luenced by, international developments. For example monetary policy actions inf luence exchange rates. The dollar’s exchange value in terms of other currencies is therefore one of the channels through which U.S. 
monetary policy affects the U.S. economy. If Federal Reserve actions raised U.S. interest rates, for instance, the foreign exchange value of the dollar generally would rise. An increase in the foreign exchange value of the dollar, in turn, would raise the price in foreign currency of U.S. goods traded on world markets and lower the dollar price of goods imported into the United States. By restraining exports and boosting imports, these developments could lower output and price levels in the economy. In contrast, an increase in interest rates in a foreign country could raise worldwide demand for assets denominated in that country’s currency and thereby reduce the dollar’s value in terms of that currency. Other things being equal, U.S. output and price levels would tend to increase—just the opposite of what happens when U.S. interest rates rise. Foreign Currency Operations The Federal Reserve conducts foreign currency operations—the buying and selling of dollars in exchange for foreign currency—under the direction of the FOMC, acting in close and continuous consultation and cooperation with the U.S. Treasury, which has overall responsibility for U.S. international financial policy. The manager of the System Open Market Account at the Federal Reserve Bank of New York acts as the agent for both the FOMC and the Treasury in carrying out foreign currency operations. Since the late 1970s, the U.S. Treasury and the Federal Reserve have conducted almost all foreign currency operations jointly and equally. Intervention operations involving dollars affect the supply of Federal Reserve balances to U.S. depository institutions, unless the Federal Reserve offsets the effect. A purchase of foreign currency by the Federal Reserve increases the supply of balances when the Federal Reserve credits the account of the seller’s depository institution at the Federal Reserve. Conversely, a sale of foreign currency by the Federal Reserve decreases the supply of balances. The Federal Reserve offsets, or “sterilizes,” the effects of intervention on Federal Reserve balances through open market operations; otherwise, the intervention could cause the federal funds rate to move away from the target set by the FOMC. US Foreign Currency Resources The main source of foreign currencies used in U.S. intervention operations currently is U.S. holdings of foreign exchange reserves. At the end of June 2004, the United States held foreign currency reserves valued at $40 billion. Of this amount, the Federal Reserve held foreign currency assets of $20 billion, and the Exchange Stabilization Fund of the Treasury held the rest. The U.S. monetary authorities have also arranged swap facilities with foreign monetary authorities to support foreign currency operations. These facilities, which are also known as reciprocal currency arrangements, provide short-term access to foreign currencies. A swap transaction involves both a spot (immediate delivery) transaction, in which the Federal Reserve transfers dollars to another central bank in exchange for foreign currency, and a simultaneous forward (future delivery) transaction, in which the two central banks agree to reverse the spot transaction, typically no later than three months in the future. The repurchase price incorporates a market rate of return in each currency of the transaction. The original purpose of swap arrangements was to facilitate a central bank’s support of its own currency in case of undesired downward pressure in foreign exchange markets. 
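The sketch below lays out the two legs of a swap drawing as described above. The amounts, the exchange rate, the term, and the rate of return are hypothetical, and for simplicity a return is accrued only on the dollar leg, whereas an actual swap incorporates a market rate of return in each currency of the transaction.

```python
# Two legs of a stylized central bank swap drawing (all figures hypothetical).
# Spot leg: the Federal Reserve delivers dollars and receives foreign currency.
# Forward leg: the transaction is reversed, typically within three months, at
# a repurchase price incorporating a market rate of return (simplified here).

def swap_legs(dollars_delivered, spot_fx_per_dollar, annual_return_pct, days=90):
    foreign_received = dollars_delivered * spot_fx_per_dollar          # spot leg
    # Forward leg: dollars come back with a market rate of return accrued
    # (applied to the dollar leg only in this simplified sketch).
    dollars_repaid = dollars_delivered * (1 + annual_return_pct / 100 * days / 360)
    return foreign_received, dollars_repaid

foreign, repaid = swap_legs(dollars_delivered=1_000.0,   # $ millions
                            spot_fx_per_dollar=0.90,     # foreign units per dollar
                            annual_return_pct=2.0)
print(f"Spot leg: deliver $1,000m, receive {foreign:,.0f}m of foreign currency")
print(f"Forward leg (90 days later): dollars returned ~ ${repaid:,.1f}m")
```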
Drawings on swap arrangements were common in the 1960s but over time declined in frequency as policy authorities came to rely more on foreign exchange reserve balances to finance currency operations. Federal Reserve standing reciprocal currency arrangements, June 30, 2004 Millions of U.S. dollars |Institution||Amount of facility||Amount drawn| |Bank of Canada|| |Bank of Mexico|| |Temporary reciprocal currency arrangements of September 2001| |European Central Bank|| |Bank of England|| |Bank of Canada|| The Federal Reserve is interested in the international activities of banks, not only because it functions as a bank supervisor but also because such activities are often close substitutes for domestic banking activities and need to be monitored carefully to help interpret U.S. monetary and credit conditions. Moreover, international banking institutions are important vehicles for capital f lows into and out of the United States. Where international banking activities are conducted depends on such factors as the business needs of customers, the scope of operations permitted by a country’s legal and regulatory framework, and tax considerations. The international activities of U.S.-chartered banks include lending to and accepting deposits from foreign customers at the banks’ U.S. offices and engaging in other financial transactions with foreign counterparts. However, the bulk of the international business of U.S.-chartered banks takes place at their branch offices located abroad and at their foreign-incorporated subsidiaries, usually wholly owned. Much of the activity of foreign branches and subsidiaries of U.S. banks has been Eurocurrency1 business—that is, taking deposits and lending in currencies other than that of the country in which the banking office is located. Increasingly, U.S. banks are also offering a range of sophisticated financial products to residents of other countries and to U.S. firms abroad. The international role of U.S. banks has a counterpart in foreign bank operations in the United States. U.S. offices of foreign banks actively participate as both borrowers and investors in U.S. domestic money markets and are active in the market for loans to U.S. businesses. (See chapter 5 for a discussion of the Federal Reserve’s supervision and regulation of the international activities of U.S. banks and the U.S. activities of foreign banks.) International banking by both U.S.-based and foreign banks facilitates the holding of Eurodollar deposits—dollar deposits in banking offices outside the United States—by nonbank U.S. entities. Similarly, Eurodollar loans—dollar loans from banking offices outside the United States—can be an important source of credit for U.S. companies (banks and non-banks). Because they are close substitutes for deposits at domestic banks, Eurodollar deposits of nonbank U.S. entities at foreign branches of U.S. banks are included in the U.S. monetary aggregate M3; Eurodollar deposits of nonbank U.S. entities at all other banking offices in the United Kingdom and Canada are also included in M3. Supervision and Regulation The Federal Reserve has supervisory and regulatory authority over a wide range of financial institutions and activities. It works with other federal and state supervisory authorities to ensure the safety and soundness of financial institutions, stability in the financial markets, and fair and equitable treatment of consumers in their financial transactions. As the U.S. 
central bank, the Federal Reserve also has extensive and well-established relationships with the central banks and financial supervisors of other countries, which enables it to coordinate its actions with those of other countries when managing international financial crises and supervising institutions with a substantial international presence. - Bank holding companies, including diversified financial holding companies formed under the Gramm-Leach-Bliley Act of 1999 and foreign banks with U.S. operations - State-chartered banks that are members of the Federal Reserve System (state member banks) - Foreign branches of member banks - Edge and agreement corporations, through which U.S. banking organizations may conduct international banking activities - U.S. state-licensed branches, agencies, and representative offices of foreign banks - Nonbanking activities of foreign banks Although the terms bank supervision and bank regulation are often used interchangeably, they actually refer to distinct, but complementary, activities. Bank supervision involves the monitoring, inspecting, and examining of banking organizations to assess their condition and their compliance with relevant laws and regulations. When a banking organization within the Federal Reserve’s supervisory jurisdiction is found to be noncompliant or to have other problems, the Federal Reserve may use its supervisory authority to take formal or informal action to have the organization correct the problems. Bank regulation entails issuing specific regulations and guidelines governing the operations, activities, and acquisitions of banking organizations. Responsibilities of the Federal Banking Agencies The Federal Reserve shares supervisory and regulatory responsibilities for domestic banking institutions with the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS) at the federal level, and with the banking departments of the various states. The primary supervisor of a domestic banking institution is generally determined by the type of institution that it is and the governmental authority that granted it permission to commence business (commonly referred to as a charter). Banks that are chartered by a state government are referred to as state banks; banks that are chartered by the OCC, which is a bureau of the Department of the Treasury, are referred to as national banks. The Federal Reserve has primary supervisory authority for state banks that elect to become members of the Federal Reserve System (state member banks). State banks that are not members of the Federal Reserve System (state nonmember banks) are supervised by the FDIC. In addition to being supervised by the Federal Reserve or FDIC, all state banks are supervised by their chartering state. The OCC supervises national banks. All national banks must become members of the Federal Reserve System. This dual federal–state banking system has evolved partly out of the complexity of the U.S. financial system, with its many kinds of depository institutions and numerous chartering authorities. It has also resulted from a wide variety of federal and state laws and regulations designed to remedy problems that the U.S. commercial banking system has faced over its history. Banks are often owned or controlled by another company. These companies are referred to as bank holding companies. 
The Federal Reserve has supervisory authority for all bank holding companies, regardless of whether the subsidiary bank of the holding company is a national bank, state member bank, or state nonmember bank. Savings associations, another type of depository institution, have historically focused on residential mortgage lending. The OTS, which is a bureau of the Department of the Treasury, charters and supervises federal savings associations and also supervises companies that own or control a savings association. These companies are referred to as thrift holding companies.

Federal Financial Institutions Examination Council

To promote consistency in the examination and supervision of banking organizations, in 1978 Congress created the Federal Financial Institutions Examination Council (FFIEC). The FFIEC is composed of the chairpersons of the FDIC and the National Credit Union Administration, the comptroller of the currency, the director of the OTS, and a governor of the Federal Reserve Board appointed by the Board Chairman. The FFIEC’s purposes are to prescribe uniform federal principles and standards for the examination of depository institutions, to promote coordination of bank supervision among the federal agencies that regulate financial institutions, and to encourage better coordination of federal and state regulatory activities. Through the FFIEC, state and federal regulatory agencies may exchange views on important regulatory issues. Among other things, the FFIEC has developed uniform financial reports for federally supervised banks to file with their federal regulator.

The main objective of the supervisory process is to evaluate the overall safety and soundness of the banking organization. This evaluation includes an assessment of the organization’s risk-management systems, financial condition, and compliance with applicable banking laws and regulations. The supervisory process entails both on-site examinations and inspections and off-site surveillance and monitoring. Typically, state member banks must have an on-site examination at least once every twelve months. Banks that have assets of less than $250 million and that meet certain management, capital, and other criteria may be examined once every eighteen months. The Federal Reserve coordinates its examinations with those of the bank’s chartering state and may alternate exam cycles with the bank’s state supervisor.

The Federal Reserve generally conducts an annual inspection of large bank holding companies (companies with consolidated assets of $1 billion or greater) and smaller bank holding companies that have significant nonbank assets. Small, noncomplex bank holding companies are subject to a special supervisory program that permits a more flexible approach that relies on off-site monitoring and the supervisory ratings of the lead subsidiary depository institution. When evaluating the consolidated condition of the holding company, Federal Reserve examiners rely heavily on the results of the examination of the company’s subsidiary banks by the primary federal or state banking authority, to minimize duplication of efforts and reduce burden on the banking organization. With the largest banking organizations growing in both size and complexity, the Federal Reserve has moved towards a risk-focused approach to supervision that is more a continuous process than a point-in-time examination.
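Before turning to risk-focused supervision in more detail, note that the examination-cycle rule described above amounts to a simple threshold test. The sketch below is illustrative only, not an official tool: the $250 million asset threshold and the twelve- and eighteen-month cycles come from the text, while the function name and the single "meets criteria" flag are simplifying assumptions.

```python
def exam_cycle_months(total_assets_usd, meets_other_criteria):
    """Illustrative sketch of the on-site examination cycle for state member
    banks described above: a twelve-month cycle in general, extendable to
    eighteen months for banks with assets under $250 million that meet
    certain management, capital, and other criteria (collapsed here into
    one boolean for simplicity)."""
    SMALL_BANK_ASSET_THRESHOLD = 250_000_000  # $250 million
    if total_assets_usd < SMALL_BANK_ASSET_THRESHOLD and meets_other_criteria:
        return 18
    return 12

print(exam_cycle_months(180_000_000, True))    # 18
print(exam_cycle_months(1_200_000_000, True))  # 12
```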
The goal of the risk-focused supervision process is to identify the greatest risks to a banking organization and assess the ability of the organization’s management to identify, measure, monitor, and control these risks. Under the risk-focused approach, Federal Reserve examiners focus on those business activities that may pose the greatest risk to the organization. Supervisory Rating System The results of an on-site examination or inspection are reported to the board of directors and management of the bank or holding company in a report of examination or inspection, which includes a confidential supervisory rating of the financial condition of the bank or holding company. The supervisory rating system is a supervisory tool that all of the federal and state banking agencies use to communicate to banking organizations the agency’s assessment of the organization and to identify institutions that raise concern or require special attention. This rating system for banks is commonly referred to as CAMELS, which is an acronym for the six components of the rating system: capital adequacy, asset quality, management and administration, earnings, liquidity, and sensitivity to market risk. The Federal Reserve also uses a supervisory rating system for bank holding companies, referred to as RFI/C(D), that takes into account risk management, financial condition, potential impact of the parent company and nondepository subsidiaries on the affiliated depository institutions, and the CAMELS rating of the affiliated depository institutions. Financial Regulatory Reports In carrying out their supervisory activities, Federal Reserve examiners and supervisory staff rely on many sources of financial and other information about banking organizations, including reports of recent examinations and inspections, information published in the financial press and elsewhere, and the standard financial regulatory reports filed by institutions. In its ongoing off-site supervision of banks and bank holding companies, the Federal Reserve uses automated screening systems to identify organizations with poor or deteriorating financial profiles and to help detect adverse trends developing in the banking industry. Accounting Policy and Disclosure Enhanced market discipline is an important component of bank supervision. Accordingly, the Federal Reserve plays a significant role in promoting sound accounting policies and meaningful public disclosure by financial institutions. Umbrella Supervision and Coordination with Other Functional Regulators In addition to owning banks, bank holding companies also may own broker-dealers engaged in securities activities or insurance companies. Indeed, one of the primary purposes of the Gramm-Leach-Bliley Act (GLB Act), enacted in 1999, was to allow banks, securities broker-dealers, and insurance companies to affiliate with each other through the bank holding company structure. To take advantage of the expanded affiliations permitted by the GLB Act, a bank holding company must meet certain capital, managerial, and other requirements and must elect to become a “financial holding company.” When a bank holding company or financial holding company owns a subsidiary broker-dealer or insurance company, the Federal Reserve seeks to coordinate its supervisory responsibilities with those of the subsidiary’s functional regulator—the Securities and Exchange Commission (SEC) in the case of a broker-dealer and the state insurance authorities in the case of an insurance company. 
The Federal Reserve’s role as the supervisor of a bank holding company or financial holding company is to review and assess the consolidated organization’s operations, risk-management systems, and capital adequacy to ensure that the holding company and its nonbank subsidiaries do not threaten the viability of the company’s depository institutions. In this role, the Federal Reserve serves as the “umbrella supervisor” of the consolidated organization. In fulfilling this role, the Federal Reserve relies to the fullest extent possible on information and analysis provided by the appropriate supervisory authority of the company’s bank, securities, or insurance subsidiaries. To enhance domestic security following the terrorist attacks of September 11, 2001, Congress passed the USA Patriot Act, which contained provisions for fighting international money laundering and for blocking terrorists’ access to the U.S. financial system. The provisions of the act that affect banking organizations were generally set forth as amendments to the Bank Secrecy Act (BSA), which was enacted in 1970. The BSA requires financial institutions doing business in the United States to report large currency transactions and to retain certain records, including information about persons involved in large currency transactions and about suspicious activity related to possible violations of federal law, such as money laundering, terrorist financing, and other financial crimes. The BSA also prohibits the use of foreign bank accounts to launder illicit funds or to avoid U.S. taxes and statutory restrictions. After September 11, 2001, the Federal Reserve implemented a number of measures to promote the continuous operation of financial markets and to ensure the continuity of Federal Reserve operations in the event of a future crisis. The process of strengthening the resilience of the private-sector financial system—focusing on organizations with systemic elements—is largely accomplished through the existing regulatory framework. In 2003, responding to the need for further guidance for financial institutions in this area, the Federal Reserve Board, the OCC, and the SEC issued the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” The paper sets forth sound practices for the financial industry to ensure a rapid recovery of the U.S. financial system in the event of a wide-scale disruption that may include loss or inaccessibility of staff. Many of the concepts in the paper amplify long-standing and well-recognized principles relating to safeguarding information and the ability to recover and resume essential financial services. Other Supervisory Activities The Federal Reserve conducts on-site examinations of banks to ensure compliance with consumer protection laws (discussed in chapter 6) as well as compliance in other areas, such as fiduciary activities, transfer agency, securities clearing agency, government and municipal securities dealing, securities credit lending, and information technology. Further, in light of the importance of information technology to the safety and soundness of banking organizations, the Federal Reserve has the authority to examine the operations of certain independent organizations that provide information technology services to supervised banking organizations. 
If the Federal Reserve determines that a state member bank or bank holding company has problems that affect the institution’s safety and soundness or is not in compliance with laws and regulations, it may take a supervisory action to ensure that the institution undertakes corrective measures. Typically, such findings are communicated to the management and directors of a banking organization in a written report. The management and directors are then asked to address all identified problems voluntarily and to take measures to ensure that the problems are corrected and will not recur. Most problems are resolved promptly after they are brought to the attention of an institution’s management and directors. In some situations, however, the Federal Reserve may need to take an informal supervisory action, requesting that an institution adopt a board resolution or agree to the provisions of a memorandum of understanding to address the problem.

Supervision of International Operations of U.S. Banking Organizations

The Federal Reserve also has supervisory and regulatory responsibility for the international operations of member banks (that is, national and state member banks) and bank holding companies. These responsibilities include

- Authorizing the establishment of foreign branches of national banks and state member banks and regulating the scope of their activities;
- Chartering and regulating the activities of Edge and agreement corporations, which are specialized institutions used for international and foreign business;
- Authorizing foreign investments of member banks, Edge and agreement corporations, and bank holding companies and regulating the activities of foreign firms acquired by such investors; and
- Establishing supervisory policy and practices regarding foreign lending by state member banks.

Under federal law, U.S. banking organizations generally may conduct a wider range of activities abroad than they may conduct in this country.

Supervision of U.S. Activities of Foreign Banking Organizations

Although foreign banks have been operating in the United States for more than a century, before 1978 the U.S. branches and agencies of these banks were not subject to supervision or regulation by any federal banking agency. When Congress enacted the International Banking Act of 1978 (IBA), it created a federal regulatory structure for the activities of foreign banks with U.S. branches and agencies.

Supervision of Transactions with Affiliates

As part of the supervisory process, the Federal Reserve also evaluates transactions between a bank and its affiliates to determine the effect of the transactions on the bank’s condition and to ascertain whether the transactions are consistent with sections 23A and 23B of the Federal Reserve Act, as implemented by the Federal Reserve Board’s Regulation W. Since the GLB Act increased the range of affiliations permitted to banking organizations, sections 23A and 23B play an increasingly important role in limiting the risk to depository institutions from these broader affiliations. Among other things, section 23A prohibits a bank from purchasing an affiliate’s low-quality assets. In addition, it limits a bank’s loans and other extensions of credit to any single affiliate to 10 percent of the bank’s capital and surplus, and it limits loans and other extensions of credit to all affiliates in the aggregate to 20 percent of the bank’s capital and surplus.
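The quantitative limits in section 23A reduce to straightforward arithmetic on a bank's capital and surplus. The following sketch is a simplification for exposition, not a compliance tool: the 10 percent single-affiliate and 20 percent aggregate caps come from the text above, while the function name, the input format, and the idea of returning a summary dictionary are assumptions.

```python
def section_23a_check(capital_and_surplus, exposures_by_affiliate):
    """Illustrative check of the section 23A limits described above: loans and
    other extensions of credit to any single affiliate are capped at 10% of
    capital and surplus, and to all affiliates combined at 20%."""
    single_limit = 0.10 * capital_and_surplus
    aggregate_limit = 0.20 * capital_and_surplus
    total_exposure = sum(exposures_by_affiliate.values())
    return {
        "single_affiliate_limit": single_limit,
        "aggregate_limit": aggregate_limit,
        "single_limit_breaches": [name for name, amount in exposures_by_affiliate.items()
                                  if amount > single_limit],
        "aggregate_limit_breached": total_exposure > aggregate_limit,
    }

# Hypothetical bank with $1 billion of capital and surplus:
print(section_23a_check(1_000_000_000,
                        {"Affiliate A": 80_000_000, "Affiliate B": 150_000_000}))
# Affiliate B exceeds the $100 million single-affiliate cap, and the
# $230 million total exceeds the $200 million aggregate cap.
```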
Section 23B requires that all transactions between a bank and its affiliates be on terms that are substantially the same, or at least as favorable, as those prevailing at the time for comparable transactions with nonaffiliated companies. The Federal Reserve Board is the only banking agency that has the authority to exempt any bank from these requirements. During the course of an examination, examiners review a banking organization’s intercompany transactions for compliance with these statutes and Regulation W.

As a bank regulator, the Federal Reserve establishes standards designed to ensure that banking organizations operate in a safe and sound manner and in accordance with applicable law. These standards may take the form of regulations, rules, policy guidelines, or supervisory interpretations and may be established under specific provisions of a law or under more general legal authority. Regulatory standards may be either restrictive (limiting the scope of a banking organization’s activities) or permissive (authorizing banking organizations to engage in certain activities).

Acquisitions and Mergers

Under the authority assigned to the Federal Reserve by the Bank Holding Company Act of 1956 as amended, the Bank Merger Act of 1960, and the Change in Bank Control Act of 1978, the Federal Reserve Board maintains broad authority over the structure of the banking system in the United States. The Bank Holding Company Act assigned to the Federal Reserve primary responsibility for supervising and regulating the activities of bank holding companies. Through this act, Congress sought to achieve two basic objectives: (1) to avoid the creation of a monopoly or the restraint of trade in the banking industry through the acquisition of additional banks by bank holding companies and (2) to keep banking and commerce separate by restricting the nonbanking activities of bank holding companies.

Historically, bank holding companies could engage only in banking activities and other activities that the Federal Reserve determined to be closely related to banking. But since the passage of the GLB Act, a bank holding company that qualifies to become a financial holding company may engage in a broader range of financially related activities, including full-scope securities underwriting and dealing, insurance underwriting and sales, and merchant banking. A bank holding company seeking financial holding company status must file a written declaration with the Federal Reserve System, certifying that the company meets the capital, managerial, and other requirements to be a financial holding company.

Under the Bank Holding Company Act, a firm that seeks to become a bank holding company must first obtain approval from the Federal Reserve. The act defines a bank holding company as any company that directly or indirectly owns, controls, or has the power to vote 25 percent or more of any class of the voting shares of a bank; controls in any manner the election of a majority of the directors or trustees of a bank; or is found to exercise a controlling influence over the management or policies of a bank. A bank holding company must obtain the approval of the Federal Reserve before acquiring more than 5 percent of the shares of an additional bank or bank holding company. All bank holding companies must file certain reports with the Federal Reserve System.
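The statutory definition just quoted is, in effect, a three-pronged control test, with a separate approval trigger for later acquisitions. A minimal sketch follows; the 25 percent voting-share and 5 percent acquisition thresholds come from the text, and the function names and boolean inputs are illustrative assumptions.

```python
def is_bank_holding_company(voting_share_pct, controls_board_majority,
                            controlling_influence_found):
    """A company is treated as a bank holding company if it owns, controls, or
    can vote 25% or more of any class of a bank's voting shares, controls the
    election of a majority of the bank's directors or trustees, or is found to
    exercise a controlling influence over the bank (simplified from the text)."""
    return (voting_share_pct >= 25.0
            or controls_board_majority
            or controlling_influence_found)

def acquisition_needs_fed_approval(new_stake_pct):
    """An existing bank holding company must obtain Federal Reserve approval
    before acquiring more than 5% of the shares of an additional bank or
    bank holding company."""
    return new_stake_pct > 5.0

print(is_bank_holding_company(26.0, False, False))  # True
print(acquisition_needs_fed_approval(4.9))          # False
```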
When considering applications to acquire a bank or a bank holding company, the Federal Reserve is required to take into account the likely effects of the acquisition on competition, the convenience and needs of the communities to be served, the financial and managerial resources and future prospects of the companies and banks involved, and the effectiveness of the company’s policies to combat money laundering. In the case of an interstate bank acquisition, the Federal Reserve also must consider certain other factors and may not approve the acquisition if the resulting organization would control more than 10 percent of all deposits held by insured depository institutions. When a foreign bank seeks to acquire a U.S. bank, the Federal Reserve also must consider whether the foreign banking organization is subject to comprehensive supervision or regulation on a consolidated basis by its home-country supervisor. Another responsibility of the Federal Reserve is to act on proposed bank mergers when the resulting institution would be a state member bank. The Bank Merger Act of 1960 sets forth the factors to be considered in evaluating merger applications. These factors are similar to those that must be considered in reviewing bank acquisition proposals by bank holding companies. To ensure that all merger applications are evaluated in a uniform manner, the act requires that the responsible agency request reports from the Department of Justice and from the other approving banking agencies addressing the competitive impact of the transaction. Other Changes in Bank Control The Change in Bank Control Act of 1978 authorizes the federal bank regulatory agencies to deny proposals by a single “person” (which includes an individual or an entity), or several persons acting in concert, to acquire control of an insured bank or a bank holding company. The Federal Reserve is responsible for approving changes in the control of bank holding companies and state member banks, and the FDIC and the OCC are responsible for approving changes in the control of insured state nonmember and national banks, respectively. In considering a proposal under the act, the Federal Reserve must review several factors, including the financial condition, competence, experience, and integrity of the acquiring person or group of persons; the effect of the transaction on competition; and the adequacy of the information provided by the acquiring party. Formation and Activities of Financial Holding Companies As authorized by the GLB Act, the Federal Reserve Board’s regulations allow a bank holding company or a foreign banking organization to become a financial holding company and engage in an expanded array of financial activities if the company meets certain capital, managerial, and other criteria. Permissible activities for financial holding companies include conducting securities underwriting and dealing, serving as an insurance agent and underwriter, and engaging in merchant banking. Other permissible activities include those that the Federal Reserve Board, after consulting with the Secretary of the Treasury, determines to be financial in nature or incidental to financial activities. Financial holding companies also may engage to a limited extent in a nonfinancial activity if the Board determines that the activity is complementary to one or more of the company’s financial activities and would not pose a substantial risk to the safety or soundness of depository institutions or the financial system. 
Capital Adequacy Standards

A key goal of banking regulation is to ensure that banks maintain sufficient capital to absorb reasonably likely losses. In 1989, the federal banking regulators adopted a common standard for measuring capital adequacy that is broadly based on the risks of an institution’s investments. This common standard, in turn, was based on the 1988 agreement “International Convergence of Capital Measurement and Capital Standards” (commonly known as the Basel Accord) developed by the Basel Committee on Banking Supervision. This committee, which is associated with the Bank for International Settlements headquartered in Switzerland, is composed of representatives of the central banks or bank supervisory authorities from Belgium, Canada, France, Germany, Italy, Japan, Luxembourg, the Netherlands, Spain, Sweden, Switzerland, the United Kingdom, and the United States.

Financial Disclosures by State Member Banks

State member banks that issue securities registered under the Securities Exchange Act of 1934 must disclose certain information of interest to investors, including annual and quarterly financial reports and proxy statements. By statute, the Federal Reserve administers these requirements and has adopted financial disclosure regulations for state member banks that are substantially similar to the SEC’s regulations for other public companies.

The Securities Exchange Act of 1934 requires the Federal Reserve to regulate the extension of credit used in connection with the purchase of securities. Through its regulations, the Board establishes the minimum amount the buyer must put up when purchasing a security. This minimum amount is known as the margin requirement. In fulfilling its responsibility under the act, the Federal Reserve limits the amount of credit that may be provided by securities brokers and dealers (Regulation T) and the amount of securities credit extended by banks and other lenders (Regulation U). These regulations generally apply to credit-financed purchases of securities traded on securities exchanges and certain securities traded over the counter when the credit is collateralized by such securities. In addition, Regulation X prohibits borrowers who are subject to U.S. laws from obtaining such credit overseas on terms more favorable than could be obtained from a domestic lender.

Consumer and Community Affairs

The number of federal laws intended to protect consumers in credit and other financial transactions has been growing since the late 1960s. Congress has assigned to the Federal Reserve the duty of implementing many of these laws to ensure that consumers receive comprehensive information and fair treatment. Among the Federal Reserve’s responsibilities in this area are

- Writing and interpreting regulations to carry out many of the major consumer protection laws,
- Reviewing bank compliance with the regulations,
- Investigating complaints from the public about state member banks’ compliance with consumer protection laws,
- Addressing issues of state and federal jurisdiction,
- Testifying before Congress on consumer protection issues, and
- Conducting community development activities.

In carrying out these responsibilities, the Federal Reserve is advised by its Consumer Advisory Council, whose members represent the interests of consumers, community groups, and creditors nationwide. Meetings of the council, which take place three times a year at the Federal Reserve Board in Washington, D.C., are open to the public.
Most financial transactions involving consumers are covered by consumer protection laws. These include transactions involving credit, charge, and debit cards issued by financial institutions and credit cards issued by retail establishments; automated teller machine transactions and other electronic fund transfers; deposit account transactions; automobile leases; mortgages and home equity loans; and lines of credit and other unsecured credit.

Educating Consumers about Consumer Protection Laws

Well-educated consumers are the best consumer protection in the market. They know their rights and responsibilities, and they use the information provided in disclosures to shop and compare. The Federal Reserve Board maintains a consumer information web site with educational materials related to the consumer protection regulations developed by the Board. In addition, the Federal Reserve staff uses consumer surveys and focus groups to learn more about what issues are important to consumers and to develop and test additional educational resources.

Enforcing Consumer Protection Laws

The Federal Reserve has a comprehensive program to examine financial institutions and other entities that it supervises to ensure compliance with consumer protection laws and regulations. Its enforcement responsibilities generally extend only to state-chartered banks that are members of the Federal Reserve System and to certain foreign banking organizations. Other federal regulators are responsible for examining banks, thrift institutions, and credit unions under their jurisdictions and for taking enforcement action.

Consumer Complaint Program

The Federal Reserve responds to inquiries and complaints from the public about the policies and practices of financial institutions involving consumer protection issues. Each Reserve Bank has staff whose primary responsibility is to investigate consumer complaints about state member banks and refer complaints about other institutions to the appropriate regulatory agencies. The Federal Reserve’s responses not only address the concerns raised but also educate consumers about financial matters.

Community affairs programs at the Board and the twelve Federal Reserve Banks promote community development and fair and impartial access to credit. Community affairs offices at the Board and Reserve Banks engage in a wide variety of activities to help financial institutions, community-based organizations, government entities, and the public understand and address financial services issues that affect low- and moderate-income people and geographic regions. Each office responds to local needs in its District and establishes its own programs to

- Foster depository institutions’ active engagement in providing credit and other banking services to their entire communities, particularly traditionally underserved markets;
- Encourage mutually beneficial cooperation among community organizations, government agencies, financial institutions, and other community development practitioners;
- Develop greater public awareness of the benefits and risks of financial products and of the rights and responsibilities that derive from community investment and fair lending regulations; and
- Promote among policy makers, community leaders, and private-sector decision makers a better understanding of the practices, processes, and resources that result in successful community development programs.

Each Federal Reserve Bank develops specific products and services to meet the informational needs of its region.
The community affairs offices issue a wide array of publications, sponsor a variety of public forums, and provide technical information on community and economic development and on fair and equal access to credit and other banking services.

Consumer Protection Laws

- Fair Housing Act (1968) Prohibits discrimination in the extension of housing credit on the basis of race, color, religion, national origin, sex, handicap, or family status.
- Truth in Lending Act (1968) Requires uniform methods for computing the cost of credit and for disclosing credit terms. Gives borrowers the right to cancel, within three days, certain loans secured by their residences. Prohibits the unsolicited issuance of credit cards and limits cardholder liability for unauthorized use. Also imposes limitations on home equity loans with rates or fees above a specified threshold.
- Fair Credit Reporting Act (1970) Protects consumers against inaccurate or misleading information in credit files maintained by credit-reporting agencies; requires credit-reporting agencies to allow credit applicants to correct erroneous reports.
- Flood Disaster Protection Act of 1973 Requires flood insurance on property in a flood hazard area that comes under the National Flood Insurance Program.
- Fair Credit Billing Act (1974) Specifies how creditors must respond to billing-error complaints from consumers; imposes requirements to ensure that creditors handle accounts fairly and promptly. Applies primarily to credit and charge card accounts (for example, store card and bank card accounts). Amended the Truth in Lending Act.
- Equal Credit Opportunity Act (1974) Prohibits discrimination in credit transactions on several bases, including sex, marital status, age, race, religion, color, national origin, the receipt of public assistance funds, or the exercise of any right under the Consumer Credit Protection Act. Requires creditors to grant credit to qualified individuals without requiring co-signature by spouses, to inform unsuccessful applicants in writing of the reasons credit was denied, and to allow married individuals to have credit histories on jointly held accounts maintained in the names of both spouses. Also entitles a borrower to a copy of a real estate appraisal report.
- Real Estate Settlement Procedures Act of 1974 Requires that the nature and costs of real estate settlements be disclosed to borrowers. Also protects borrowers against abusive practices, such as kickbacks, and limits the use of escrow accounts.
- Home Mortgage Disclosure Act of 1975 Requires mortgage lenders to annually disclose to the public data about the geographic distribution of their applications, originations, and purchases of home-purchase and home-improvement loans and refinancings. Requires lenders to report data on the ethnicity, race, sex, and income of applicants and borrowers, and other data. Also directs the Federal Financial Institutions Examination Council, of which the Federal Reserve is a member, to make summaries of the data available to the public.
- Consumer Leasing Act of 1976 Requires that institutions disclose the cost and terms of consumer leases, such as automobile leases.
- Fair Debt Collection Practices Act (1977) Prohibits abusive debt collection practices. Applies to banks that function as debt collectors for other entities.
- Community Reinvestment Act of 1977 Encourages financial institutions to help meet the credit needs of their entire communities, particularly low- and moderate-income neighborhoods.
- Right to Financial Privacy Act of 1978 Protects bank customers from the unlawful scrutiny of their financial records by federal agencies and specifies procedures that government authorities must follow when they seek information about a customer’s financial records from a financial institution.
- Electronic Fund Transfer Act (1978) Establishes the basic rights, liabilities, and responsibilities of consumers who use electronic fund transfer services and of financial institutions that offer these services. Covers transactions conducted at automated teller machines, at point-of-sale terminals in stores, and through telephone bill-payment plans and preauthorized transfers to and from a customer’s account, such as direct deposit of salary or Social Security payments.
- Federal Trade Commission Improvement Act (1980) Authorizes the Federal Reserve to identify unfair or deceptive acts or practices by banks and to issue regulations to prohibit them. Using this authority, the Federal Reserve has adopted rules substantially similar to those adopted by the FTC that restrict certain practices in the collection of delinquent consumer debt, for example, practices related to late charges, responsibilities of cosigners, and wage assignments.
- Expedited Funds Availability Act (1987) Specifies when depository institutions must make funds deposited by check available to depositors for withdrawal. Requires institutions to disclose to customers their policies on funds availability.
- Women’s Business Ownership Act of 1988 Extends to applicants for business credit certain protections afforded consumer credit applicants, such as the right to an explanation for credit denial. Amended the Equal Credit Opportunity Act.
- Fair Credit and Charge Card Disclosure Act of 1988 Requires that applications for credit cards that are sent through the mail, solicited by telephone, or made available to the public (for example, at counters in retail stores or through catalogs) contain information about key terms of the account. Amended the Truth in Lending Act.
- Home Equity Loan Consumer Protection Act of 1988 Requires creditors to provide consumers with detailed information about open-end credit plans secured by the consumer’s dwelling. Also regulates advertising of home equity loans and restricts the terms of home equity loan plans.
- Truth in Savings Act (1991) Requires that depository institutions disclose to depositors certain information about their accounts—including the annual percentage yield, which must be calculated in a uniform manner—and prohibits certain methods of calculating interest. Regulates advertising of savings accounts.
- Home Ownership and Equity Protection Act of 1994 Provides additional disclosure requirements and substantive limitations on home-equity loans with rates or fees above a certain percentage or amount. Amended the Truth in Lending Act.
- Gramm-Leach-Bliley Act, title V, subpart A, Disclosure of Nonpublic Personal Information (1999) Describes the conditions under which a financial institution may disclose nonpublic personal information about consumers to nonaffiliated third parties, provides a method for consumers to opt out of information sharing with nonaffiliated third parties, and requires a financial institution to notify consumers about its privacy policies and practices.
- Fair and Accurate Credit Transactions Act of 2003 Enhances consumers’ ability to combat identity theft, increases the accuracy of consumer reports, allows consumers to exercise greater control over the type and amount of marketing solicitations they receive, restricts the use and disclosure of sensitive medical information, and establishes uniform national standards in the regulation of consumer reporting. Amended the Fair Credit Reporting Act.
http://www.knowfinance.com/federal-reserve-system-fed/
You solve a quadratic equation when you have been given a y-value and need to find all of the corresponding x-values. For example, if you had been given the quadratic y = x² + 8·x + 10, and the y-value, y = 30, then solving the quadratic equation would mean finding all of the numerical values of x that work when you plug them into the equation: x² + 8·x + 10 = 30. Note that solving this quadratic equation is the same as solving the quadratic equation:

x² + 8·x + 10 - 30 = 30 - 30    (Subtract 30 from each side)
x² + 8·x - 20 = 0

Solving the quadratic equation x² + 8·x - 20 = 0 will give exactly the same values for x that solving the original quadratic equation, x² + 8·x + 10 = 30, will give. The advantage of manipulating the quadratic equation to reduce one side of the equation to zero before attempting to find any values of x is that this manipulation creates a new quadratic equation that can be solved using some fairly standard techniques and formulas. Solving a polynomial equation is exactly the same kind of process as solving a quadratic equation, except that the quadratic might be replaced by a different kind of polynomial (such as a cubic or a quartic).

The Number of Solutions of a Polynomial Equation

A quadratic is a degree 2 polynomial. This means that the highest power of x that shows up in a quadratic's formula is x². The maximum number of solutions that a quadratic equation can possibly have is 2. In general, the maximum number of solutions that a polynomial equation can have is equal to the degree of the polynomial. It is possible for a polynomial equation to have fewer solutions (or none at all). The degree of the polynomial gives you the maximum number of solutions that are theoretically possible, not the actual number of solutions that will occur.

Example: Solving a Polynomial Equation Graphically

The graph given below shows the graph of the polynomial function: Use the graph to find all solutions of the polynomial equation: Graphically, since the polynomial equation sets the polynomial equal to zero, the solutions of the polynomial equation will be the x-coordinates of the points where the graph of the polynomial touches or crosses the x-axis. If you look carefully at the graph supplied above, the graph of the polynomial touches or cuts the x-axis at the following points: x = -2, x = -1, x = 2. The solutions of the polynomial equation are x = -2, x = -1, and x = 2.
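As a quick numerical check of the worked example above, the sketch below solves x² + 8·x - 20 = 0 with the quadratic formula and confirms that the same x-values give y = 30 in the original equation. The helper function name is not from the original page; it is just an illustration.

```python
import math

def solve_quadratic(a, b, c):
    """Return the real solutions of a*x**2 + b*x + c = 0 (zero, one, or two)."""
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return []                       # no real solutions
    if discriminant == 0:
        return [-b / (2 * a)]           # one repeated solution
    root = math.sqrt(discriminant)
    return [(-b - root) / (2 * a), (-b + root) / (2 * a)]

# Solve x^2 + 8x - 20 = 0, obtained by subtracting 30 from both sides
# of x^2 + 8x + 10 = 30.
solutions = solve_quadratic(1, 8, -20)
print(solutions)                        # [-10.0, 2.0]

# Each solution should reproduce y = 30 in the original quadratic.
for x in solutions:
    print(x, x**2 + 8 * x + 10)         # both print 30.0
```

The quadratic here has two real solutions, the maximum allowed by its degree, matching the rule stated above.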
http://www.algebra-tutoring.com/solving-quadratic-polynomial-equations-1.htm
The Vedic people entered India about 1500 BC from the region that today is Iran. The word Vedic describes the religion of these people and the name comes from their collections of sacred texts known as the Vedas. The texts date from about the 15th to the 5th century BC and were used for sacrificial rites which were the main feature of the religion. There was a ritual which took place at an altar where food, and sometimes animals, were sacrificed. The Vedas contain recitations and chants to be used at these ceremonies. Later prose was added called Brahmanas which explained how the texts were to be used in the ceremonies. They also tell of the origin and the importance of the sacrificial rites themselves.

The Sulbasutras are appendices to the Vedas which give rules for constructing altars. If the ritual sacrifice was to be successful then the altar had to conform to very precise measurements. The people made sacrifices to their gods so that the gods might be pleased and give the people plenty of food, good fortune, good health, long life, and lots of other material benefits. For the gods to be pleased everything had to be carried out with a very precise formula, so mathematical accuracy was seen to be of the utmost importance. We should also note that there were two types of sacrificial rites, one being a large public gathering while the other was a small family affair. Different types of altars were necessary for the two different types of ceremony.

All that is known of Vedic mathematics is contained in the Sulbasutras. This in itself gives us a problem, for we do not know if these people undertook mathematical investigations for their own sake, as for example the ancient Greeks did, or whether they only studied mathematics to solve problems necessary for their religious rites. Some historians have argued that mathematics, in particular geometry, must have also existed to support astronomical work being undertaken around the same period. Certainly the Sulbasutras do not contain any proofs of the rules which they describe. Some of the rules, such as the method of constructing a square of area equal to a given rectangle, are exact. Others, such as constructing a square of area equal to that of a given circle, are approximations. We shall look at both of these examples below but the point we wish to make here is that the Sulbasutras make no distinction between the two. Did the writers of the Sulbasutras know which methods were exact and which were approximations?

The Sulbasutras were written by a scribe, although he was not the type of scribe who merely makes a copy of an existing document but one who put in considerable content of his own, and all the mathematical results may have been due to these scribes. We know nothing of the men who wrote the Sulbasutras other than their names and a rough indication of the period in which they lived. Like many ancient mathematicians our only knowledge of them is their writings. The most important of these documents are the Baudhayana Sulbasutra written about 800 BC and the Apastamba Sulbasutra written about 600 BC. Historians of mathematics have also studied and written about other Sulbasutras of lesser importance such as the Manava Sulbasutra written about 750 BC and the Katyayana Sulbasutra written about 200 BC.

Let us now examine some of the mathematics contained within the Sulbasutras. The first result which was clearly known to the authors is Pythagoras's theorem.
The Baudhayana Sulbasutra gives only a special case of the theorem explicitly:- The rope which is stretched across the diagonal of a square produces an area double the size of the original square. The Katyayana Sulbasutra, however, gives a more general version:- The rope which is stretched along the length of the diagonal of a rectangle produces an area which the vertical and horizontal sides make together. The diagram on the right illustrates this result. Note here that the results are stated in terms of "ropes". In fact, although sulbasutras originally meant rules governing religious rites, sutras came to mean a rope for measuring an altar. While thinking of explicit statements of Pythagoras's theorem, we should note that as it is used frequently there are many examples of Pythagorean triples in the Sulbasutras. For example (5, 12, 13), (12, 16, 20), (8, 15, 17), (15, 20, 25), (12, 35, 37), (15, 36, 39), (5/2, 6, 13/2), and (15/2, 10, 25/2) all occur.

Now the Sulbasutras are really construction manuals for geometric shapes such as squares, circles, rectangles, etc. and we illustrate this with some examples. The first construction we examine occurs in most of the different Sulbasutras. It is a construction, based on Pythagoras's theorem, for making a square equal in area to two given unequal squares. Consider the diagram on the right. ABCD and PQRS are the two given squares. Mark a point X on PQ so that PX is equal to AB. Then the square on SX has area equal to the sum of the areas of the squares ABCD and PQRS. This follows from Pythagoras's theorem since SX² = PX² + PS².

The next construction which we examine is that to find a square equal in area to a given rectangle. We give the version as it appears in the Baudhayana Sulbasutra. Consider the diagram on the right. The rectangle ABCD is given. Let L be marked on AD so that AL = AB. Then complete the square ABML. Now bisect LD at X and divide the rectangle LMCD into two equal rectangles with the line XY. Now move the rectangle XYCD to the position MBQN. Complete the square AQPX. Now the square we have just constructed is not the one we require and a little more work is needed to complete the work. Rotate PQ about Q so that it touches BY at R. Then QP = QR and we see that this is an ideal "rope" construction. Now draw RE parallel to YP and complete the square QEFG. This is the required square equal to the given rectangle ABCD. The Baudhayana Sulbasutra offers no proof of this result (or any other for that matter) but we can see that it is true by using Pythagoras's theorem:

EQ² = QR² - RE² = QP² - YP² = ABYX + BQNM = ABYX + XYCD,

which is the area of the given rectangle ABCD.

All the Sulbasutras contain a method to square the circle. It is an approximate method based on constructing a square of side 13/15 times the diameter of the given circle as in the diagram on the right. This corresponds to taking π = 4 × (13/15)² = 676/225 = 3.00444 so it is not a very good approximation and certainly not as good as was known earlier to the Babylonians. It is worth noting that many different values of π appear in the Sulbasutras, even several different ones in the same text. This is not surprising for whenever an approximate construction is given some value of π is implied. The authors thought in terms of approximate constructions, not in terms of exact constructions with π but only having an approximate value for it. For example in the Baudhayana Sulbasutra, as well as the value of 676/225, there appears 900/289 and 1156/361.
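The values of π implied by these constructions are easy to verify numerically. The short sketch below is an addition for this edition, not part of the original article: it checks that a square of side 13/15 of the circle's diameter corresponds to π = 4 × (13/15)² = 676/225, and evaluates the other two fractions mentioned for the Baudhayana Sulbasutra.

```python
from fractions import Fraction

# Squaring the circle with a square of side (13/15) x diameter means
# pi * (d/2)**2 = ((13/15) * d)**2, so pi = 4 * (13/15)**2.
implied_pi = 4 * Fraction(13, 15) ** 2
print(implied_pi, float(implied_pi))     # 676/225  3.00444...

# The other values of pi implied elsewhere in the Baudhayana Sulbasutra.
for value in (Fraction(900, 289), Fraction(1156, 361)):
    print(value, float(value))           # 3.1142...  3.2022...
```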
In different Sulbasutras the values 2.99, 3.00, 3.004, 3.029, 3.047, 3.088, 3.1141, 3.16049 and 3.2022 can all be found. The value π = 25/8 = 3.125 is found in the Manava Sulbasutra. In addition to the problem of squaring the circle as given by Apastamba, the same Sulbasutra contains the problem of dividing a segment into seven equal parts, which has also been studied.

The Sulbasutras also examine the converse problem of finding a circle equal in area to a given square. Consider the diagram on the right. The following construction appears. Given a square ABCD find the centre O. Rotate OD to position OE where OE passes through the midpoint P of the side of the square DC. Let Q be the point on PE such that PQ is one third of PE. The required circle has centre O and radius OQ. Again it is worth calculating what value of π this implies to get a feel for how accurate the construction is. Now if the square has side 2a then the radius of the circle is r where r = OE - EQ = √2a - 2/3(√2a - a) = a(√2/3 + 2/3). Then 4a² = πa²(√2/3 + 2/3)², which gives π = 36/(√2 + 2)² = 3.088.

As a final look at the mathematics of the Sulbasutras we examine what may be the most remarkable. Both the Apastamba Sulbasutra and the Katyayana Sulbasutra give the following approximation to √2:- Increase a unit length by its third and this third by its own fourth less the thirty-fourth part of that fourth. Now this gives √2 = 1 + 1/3 + 1/(3 × 4) - 1/(3 × 4 × 34) = 577/408 which is, to nine places, 1.414215686. Compare the correct value √2 = 1.414213562 to see that the Apastamba Sulbasutra has the answer correct to five decimal places. Of course no indication is given as to how the authors of the Sulbasutras achieved this remarkable result.

Datta, in 1932, made a beautiful suggestion as to how this approximation may have been reached. Datta considers a diagram similar to the one on the right. The most likely reason for the construction was to build an altar twice the size of one already built. Datta's suggestion involves taking two squares and cutting up the second square and assembling it around the first square to give a square twice the size, thus having side √2. The second square is cut into three equal strips, and strips 1 and 2 placed around the first square as indicated in the diagram. The third strip has a square cut off the top and placed in position 3. We now have a new square but some of the second square remains and still has to be assembled around the first. Cut the remaining parts (two-thirds of a strip) into eight equal strips and arrange them around the square we are constructing as in the diagram. We have now used all the parts of the second square but the new figure we have constructed is not quite a square having a small square corner missing. It is worth seeing what the side of this "not quite a square" is. It is 1 + 1/3 + 1/(3 × 4) which, of course, is the first three terms of the approximation. Now Datta argues that to improve the "not quite a square" the Sulbasutra authors could have calculated how broad a strip one needs to cut off the left hand side and bottom to fill in the missing part which has area (1/12)². If x is the width one cuts off then 2 × x × (1 + 1/3 + 1/12) = (1/12)². This has the solution x = 1/(3 × 4 × 34) which is approximately 0.002450980392. We now have a square the length of whose sides is 1 + 1/3 + 1/(3 × 4) - 1/(3 × 4 × 34) which is exactly the approximation given by the Apastamba Sulbasutra.
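The arithmetic behind Datta's reconstruction can also be checked directly. The sketch below (again an editorial addition, not part of the original article) evaluates the Sulbasutra approximation 1 + 1/3 + 1/(3 × 4) - 1/(3 × 4 × 34) = 577/408 and the strip width x obtained from 2 × x × (1 + 1/3 + 1/12) = (1/12)².

```python
from fractions import Fraction
import math

# Sulbasutra approximation to sqrt(2): 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
approximation = 1 + Fraction(1, 3) + Fraction(1, 3 * 4) - Fraction(1, 3 * 4 * 34)
print(approximation, float(approximation))   # 577/408  1.41421568...
print(math.sqrt(2))                          # 1.41421356...
print(float(approximation) - math.sqrt(2))   # error of roughly 2.1e-06

# Datta's strip width: solve 2*x*(1 + 1/3 + 1/12) = (1/12)**2 for x.
x = Fraction(1, 12) ** 2 / (2 * (1 + Fraction(1, 3) + Fraction(1, 12)))
print(x, float(x))                           # 1/408  0.00245098...
```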
Of course we have still made an approximation since the two strips of breadth x which we cut off overlapped by a square of side x in the bottom left hand corner. If we had taken this into account we would have obtained the equation 2 × x × (1 + 1/3 + 1/12) - x² = (1/12)² for x which leads to x = 17/12 - √2, which is approximately equal to 0.002453105. Of course we cannot take this route since we have arrived back at a value for x which involves √2, which is the quantity we are trying to approximate!

Gupta gives a simpler way of obtaining the approximation for √2 than that given by Datta. He uses linear interpolation to obtain the first two terms; he then corrects the two terms, so obtaining the third term, then corrects the three terms, obtaining the fourth term. Although the method given by Gupta is simpler (and an interesting contribution) there is certainly something appealing in Datta's argument and somehow a feeling that this is in the spirit of the Sulbasutras.

Of course the method used by these mathematicians is very important to understanding the depth of mathematics being produced in India in the middle of the first millennium BC. If we follow the suggestion of some historians that the writers of the Sulbasutras were merely copying an approximation already known to the Babylonians then we might come to the conclusion that Indian mathematics of this period was far less advanced than if we follow Datta's suggestion.

Article by: J J O'Connor and E F Robertson

The URL of this page is:
http://www-history.mcs.st-andrews.ac.uk/HistTopics/Indian_sulbasutras.html
Math 130-03, Lab 9

This lab is about inverse functions, including logarithm functions, and their derivatives. Logarithm functions are defined as inverse functions of exponential functions. It's a good idea to be familiar with the exponential functions, which are of the form a^x for some positive number a. Move the slider in the following applet to see how the function depends on the value of a.

There is one particular exponential function that is very important in calculus. The number e, which is about 2.718, has the property that the function e^x is its own derivative; that is, the derivative of e^x is e^x itself. To verify this, try entering e^x as the function in the following applet. You can move the tangent line along the graph by clicking and dragging on the graph. What if you enter a different exponential function? Try it and see. Notice that the derivatives of these functions make use of a new function named "ln". This function is actually the logarithm function to the base e. That is, ln(x) is the logarithm of x to the base e. Thus, ln(x) is the inverse function of e^x. Try entering ln(x) as the function in the above applet. Check out the derivative of this function, as reported by the applet.

If f and f⁻¹ are inverse functions, then f⁻¹(f(x)) = x for all x in the domain of f. We can investigate derivatives of inverse functions using the function composition applet, which you have seen previously. We have already noted that the product of the slopes of the tangent lines to the two functions is equal to the slope of the tangent line to the composition. This is essentially what the chain rule says. Now, we can think about what this says when the two functions are inverse functions. In that case, the composition function is just y = x, and its slope is 1. Click the button to open the applet. The applet is set up to show an exponential and a logarithmic function. These are inverse functions. You can drag the red square to change the points at which the tangent lines are drawn. Answer the following questions, based on this applet:

(a) As you know from working with the chain rule, you have to be careful about which input values are used for functions in your formulas. In the applet, the tangent line to is shown at the point where the input value is . Explain how the formula can be deduced from the fact that the slope of the third tangent line in the applet is equal to the product of the slopes of the other two tangent lines. Keep in mind that in this example, is .

(c) Here is an unrelated question about inverse functions. The function tan(x) does not have an inverse, since it is not one-to-one. However, we can restrict the domain of this function to get a one-to-one function. The function f(x) = tan(x), for -pi/2 < x < pi/2, does have an inverse. The inverse function is denoted arctan(x). Use the applet to look at the function arctan(x). Try to explain what you see.

The second exercise is to verify the formulas for the derivative of an inverse function. This can be done using the chain rule. The discussion assumes that the inverse of a differentiable function is also differentiable, but that should be easy to believe, based on the graph.

(a) Use the chain rule to show that for any differentiable function .

(b) Since ln(x) is the inverse function of e^x, it satisfies e^(ln(x)) = x. Apply the operator d/dx to both sides of this equation, and use the result to deduce that the derivative of ln(x) is 1/x. Explain your reasoning.

(c) Let f be any differentiable function that has an inverse. We know that f⁻¹(f(x)) = x. Differentiate both sides of this equation and apply the chain rule to show that (f⁻¹)'(f(x)) = 1/f'(x).

(d) Let . Note that .
Since the derivative of this function is always positive, it has an inverse function. Let be the inverse function. Find the value of . Explain your answer.

Now that you have the formulas for the derivatives of e^x and ln(x), you can combine these formulas with all the other rules that you already know for differentiation. Compute the following derivatives, showing each step in your work:

David Eck, March 2001
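The results of this lab can also be cross-checked symbolically. The sketch below uses the sympy library, which is not part of the lab itself and is assumed here only for verification: it confirms that the derivative of ln(x) is 1/x and that the inverse of f(x) = e^x satisfies the general relation (f⁻¹)'(f(x)) = 1/f'(x).

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# The derivative of ln(x) is 1/x.
print(sp.diff(sp.log(x), x))             # 1/x

# General inverse-function relation, illustrated with f(x) = e^x and its
# inverse g(y) = ln(y): g'(f(x)) should equal 1/f'(x).
f = sp.exp(x)
g = sp.log(y)
lhs = sp.diff(g, y).subs(y, f)           # g'(f(x)), which simplifies to e**(-x)
rhs = 1 / sp.diff(f, x)                  # 1/f'(x),  also e**(-x)
print(sp.simplify(lhs - rhs) == 0)       # True
```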
http://math.hws.edu/eck/math130_s01/lab9/
by Hugh Ross

In recent years these and other parameters for the universe have been more sharply defined. Now, nearly two dozen coincidences evincing design have been identified.

1. The gravitational coupling constant--i.e., the force of gravity--determines what kinds of stars are possible in the universe. If the gravitational force were slightly stronger, star formation would proceed more efficiently and all stars would be more massive than our sun by at least 1.4 times. These large stars are important in that they alone manufacture elements heavier than iron, and they alone disperse elements heavier than beryllium to the interstellar medium. Such elements are essential for the formation of planets as well as of living things in any form. However, these stars burn too rapidly and too unevenly to maintain life-supporting conditions on surrounding planets. Stars as small as our sun are necessary for life. On the other hand, if the gravitational force were slightly weaker, all stars would have less than 0.8 times the mass of the sun. Though such stars burn long and evenly enough to maintain life-supporting planets, no heavy elements essential for building such planets or life would exist.

2. The strong nuclear force coupling constant holds together the particles in the nucleus of an atom. If the strong nuclear force were slightly weaker, multi-proton nuclei would not hold together. Hydrogen would be the only element in the universe. If this force were slightly stronger, not only would hydrogen be rare in the universe, but also the supply of the various life-essential elements heavier than iron (elements resulting from the fission of very heavy elements) would be insufficient. Either way, life would be impossible.

3. The weak nuclear force coupling constant affects the behavior of leptons. Leptons form a whole class of elementary particles (e.g., neutrinos, electrons, and photons) that do not participate in strong nuclear reactions. The most familiar weak interaction effect is radioactivity, in particular, the beta decay reaction: neutron -> proton + electron + neutrino. The availability of neutrons as the universe cools through temperatures appropriate for nuclear fusion determines the amount of helium produced during the first few minutes of the big bang. If the weak nuclear force coupling constant were slightly larger, neutrons would decay more readily, and therefore would be less available. Hence, little or no helium would be produced from the big bang. Without the necessary helium, heavy elements sufficient for the constructing of life would not be made by the nuclear furnaces inside stars. On the other hand, if this constant were slightly smaller, the big bang would burn most or all of the hydrogen into helium, with a subsequent over-abundance of heavy elements made by stars, and again life would not be possible. A second, possibly more delicate, balance occurs for supernovae. It appears that an outward surge of neutrinos determines whether or not a supernova is able to eject its heavy elements into outer space. If the weak nuclear force coupling constant were slightly larger, neutrinos would pass through a supernova's envelope without disturbing it. Hence, the heavy elements produced by the supernova would remain in the core. If the constant were slightly smaller, the neutrinos would not be capable of blowing away the envelope. Again, the heavy elements essential for life would remain trapped forever within the cores of supernovae.
4. The electromagnetic coupling constant binds electrons to protons in atoms. The characteristics of the orbits of electrons about atoms determine to what degree atoms will bond together to form molecules. If the electromagnetic coupling constant were slightly smaller, no electrons would be held in orbits about nuclei. If it were slightly larger, an atom could not "share" an electron orbit with other atoms. Either way, molecules, and hence life, would be impossible.

5. The ratio of electron to proton mass also determines the characteristics of the orbits of electrons about nuclei. A proton is 1,836 times more massive than an electron. If the electron to proton mass ratio were slightly larger or slightly smaller, again, molecules would not form, and life would be impossible.

6. The age of the universe governs what kinds of stars exist. It takes about three billion years for the first stars to form. It takes another ten or twelve billion years for supernovae to spew out enough heavy elements to make possible stars like our sun, stars capable of spawning rocky planets. Yet another few billion years is necessary for solar-type stars to stabilize sufficiently to support advanced life on any of their planets. Hence, if the universe were just a couple of billion years younger, no environment suitable for life would exist. However, if the universe were about ten (or more) billion years older than it is, there would be no solar-type stars in a stable burning phase in the right part of a galaxy. In other words, the window of time during which life is possible in the universe is relatively narrow.

7. The expansion rate of the universe determines what kinds of stars, if any, form in the universe. If the rate of expansion were slightly less, the whole universe would have recollapsed before any solar-type stars could have settled into a stable burning phase. If the universe were expanding slightly more rapidly, no galaxies (and hence no stars) would condense from the general expansion. How critical is this expansion rate? According to Alan Guth,[6] it must be fine-tuned to an accuracy of one part in 10^55. Guth, however, suggests that his inflationary model, given certain values for the four fundamental forces of physics, may provide a natural explanation for the critical expansion rate.

8. The entropy level of the universe affects the condensation of massive systems. The universe contains 100,000,000 photons for every baryon. This makes the universe extremely entropic, i.e., a very efficient radiator and a very poor engine. If the entropy level for the universe were slightly larger, no galactic systems would form (and therefore no stars). If the entropy level were slightly smaller, the galactic systems that formed would effectively trap radiation and prevent any fragmentation of the systems into stars. Either way the universe would be devoid of stars and, thus, of life. (Some models for the universe relate this coincidence to a dependence of entropy upon the gravitational coupling constant.[7,8])

9. The mass of the universe (actually mass + energy, since E = mc^2) determines how much nuclear burning takes place as the universe cools from the hot big bang. If the mass were slightly larger, too much deuterium (hydrogen atoms with nuclei containing both a proton and a neutron) would form during the cooling of the big bang. Deuterium is a powerful catalyst for subsequent nuclear burning in stars. This extra deuterium would cause stars to burn much too rapidly to sustain life on any possible planet.
On the other hand, if the mass of the universe were slightly smaller, no helium would be generated during the cooling of the big bang. Without helium, stars cannot produce the heavy elements necessary for life. Thus, we see a reason the universe is as big as it is. If it were any smaller (or larger), not even one planet like the earth would be possible.

10. The uniformity of the universe determines its stellar components. Our universe has a high degree of uniformity. Such uniformity is considered to arise most probably from a brief period of inflationary expansion near the time of the origin of the universe. If the inflation (or some other mechanism) had not smoothed the universe to the degree we see, the universe would have developed into a plethora of black holes separated by virtually empty space. On the other hand, if the universe were smoothed beyond this degree, stars, star clusters, and galaxies may never have formed at all. Either way, the resultant universe would be incapable of supporting life.

11. The stability of the proton affects the quantity of matter in the universe and also the radiation level as it pertains to higher life forms. Each proton contains three quarks. Through the agency of other particles (called bosons) quarks decay into antiquarks, pions, and positive electrons. Currently in our universe this decay process occurs on the average of only once per proton per 10^32 years.[b] If that rate were greater, the biological consequences for large animals and man would be catastrophic, for the proton decays would deliver lethal doses of radiation. On the other hand, if the proton were more stable (less easily formed and less likely to decay), less matter would have emerged from events occurring in the first split second of the universe's existence. There would be insufficient matter in the universe for life to be possible.

12. The fine structure constants relate directly to each of the four fundamental forces of physics (gravitational, electromagnetic, strong nuclear, and weak nuclear). Compared to the coupling constants, the fine structure constants typically yield stricter design constraints for the universe. For example, the electromagnetic fine structure constant affects the opacity of stellar material. (Opacity is the degree to which a material impedes the passage of radiant energy.) In star formation, gravity pulls material together while thermal motions tend to pull it apart. An increase in the opacity of this material will limit the effect of thermal motions. Hence, smaller clumps of material will be able to overcome the resistance of the thermal motions. If the electromagnetic fine structure constant were slightly larger, all stars would be less than 0.7 times the mass of the sun. If the electromagnetic fine structure constant were slightly smaller, all stars would be more than 1.8 times the mass of the sun.

13. The velocity of light can be expressed in a variety of ways as a function of any one of the fundamental forces of physics or as a function of one of the fine structure constants. Hence, in the case of this constant, too, the slightest change, up or down, would negate any possibility for life in the universe.

14. The 8Be, 12C, and 16O nuclear energy levels affect the manufacture and abundances of elements essential to life. Atomic nuclei exist in various discrete energy levels. A transition from one level to another occurs through the emission or capture of a photon that possesses precisely the energy difference between the two levels.
The first coincidence here is that 8Be decays in just 10^-15 seconds. Because 8Be is so highly unstable, it slows down the fusion process. If it were more stable, fusion of heavier elements would proceed so readily that catastrophic stellar explosions would result. Such explosions would prevent the formation of many heavy elements essential for life. On the other hand, if 8Be were even more unstable, element production beyond 8Be would not occur. The second coincidence is that 12C happens to have a nuclear energy level very slightly above the sum of the energy levels for 8Be and 4He. Anything other than this precise nuclear energy level for 12C would guarantee insufficient carbon production for life. The third coincidence is that 16O has exactly the right nuclear energy level either to prevent all the carbon from turning into oxygen or to facilitate sufficient production of 16O for life. Fred Hoyle, who discovered these coincidences in 1953, concluded that "a superintellect has monkeyed with physics, as well as with chemistry and biology."

15. The distance between stars affects the orbits and even the existence of planets. The average distance between stars in our part of the galaxy is about 30 trillion miles. If this distance were slightly smaller, the gravitational interaction between stars would be so strong as to destabilize planetary orbits. This destabilization would create extreme temperature variations on the planet. If this distance were slightly larger, the heavy element debris thrown out by supernovae would be so thinly distributed that rocky planets like earth would never form. The average distance between stars is just right to make possible a planetary system such as our own.

16. The rate of luminosity increase for stars affects the temperature conditions on surrounding planets. Small stars, like the sun, settle into a stable burning phase once the hydrogen fusion process ignites within their core. However, during this stable burning phase such stars undergo a very gradual increase in their luminosity. This gradual increase is perfectly suitable for the gradual introduction of life forms, in a sequence from primitive to advanced, upon a planet. If the rate of increase were slightly greater, a runaway greenhouse effect[c] would be felt sometime between the introduction of the primitive and the introduction of the advanced life forms. If the rate of increase were slightly smaller, a runaway freezing[d] of the oceans and lakes would occur. Either way, the planet's temperature would become too extreme for advanced life or even for the long-term survival of primitive life.

This list of sensitive constants is by no means complete. Yet it demonstrates why a growing number of physicists and astronomers have become convinced that the universe was not only divinely brought into existence but also divinely designed. American astronomer George Greenstein expresses his thoughts: "As we survey all the evidence, the thought insistently arises that some supernatural agency--or, rather, Agency--must be involved. Is it possible that suddenly, without intending to, we have stumbled upon scientific proof of the existence of a Supreme Being? Was it God who stepped in and so providentially crafted the cosmos for our benefit?"

The Earth as a Fit Habitat

It is not just the universe that bears evidence for design. The earth itself reveals such evidence.
Frank Drake, Carl Sagan, and Iosef Shklovsky were among the first astronomers to concede this point when they attempted to estimate the number of planets in the universe with environments favorable for the support of life. In the early 1960's they recognized that only a certain kind of star with a planet just the right distance from that star would provide the necessary conditions for life.[12] On this basis they made some rather optimistic estimates for the probability of finding life elsewhere in the universe. Shklovsky and Sagan, for example, claimed that 0.001 percent of all stars could have a planet upon which advanced life could exist. While their analysis was a step in the right direction, it overestimated the range of permissible star types and the range of permissible planetary distances. It also ignored many other significant factors. A sample of parameters sensitive for the support of life on a planet is listed in Table 1.

Table 1: Evidence for the design of the sun-earth-moon system

The following parameters cannot exceed certain limits without disturbing the earth's capacity to support life. Some of these parameters are more narrowly confining than others. For example, the first parameter would eliminate only half the stars from candidacy for life-supporting systems, whereas parameters five, seven, and eight would each eliminate more than ninety-nine stars in a hundred from candidacy. Not only must the parameters for life support fall within a certain restrictive range, but they must remain relatively constant over time. And we know that several, such as parameters fourteen through nineteen, are subject to potentially catastrophic fluctuation. In addition to the parameters listed here, there are others, such as the eccentricity of a planet's orbit, that have an upper (or a lower) limit only.

1. number of star companions
   if more than one: tidal interactions would disrupt planetary orbits
   if less than one: not enough heat produced for life
2. parent star birth date
   if more recent: star would not yet have reached stable burning phase
   if less recent: stellar system would not yet contain enough heavy elements
3. parent star age
   if older: luminosity of star would not be sufficiently stable
   if younger: luminosity of star would not be sufficiently stable
4. parent star distance from center of galaxy
   if greater: not enough heavy elements to make rocky planets
   if less: stellar density and radiation would be too great
5. parent star mass
   if greater: luminosity output from the star would not be sufficiently stable
   if less: range of distances appropriate for life would be too narrow; tidal forces would disrupt the rotational period for a planet of the right distance
6. parent star color
   if redder: insufficient photosynthetic response
   if bluer: insufficient photosynthetic response
7. surface gravity
   if stronger: planet's atmosphere would retain huge amounts of ammonia
   if weaker: planet's atmosphere would lose too much water
8. distance from parent star
   if farther away: too cool for a stable water cycle
   if closer: too warm for a stable water cycle
9. thickness of crust
   if thicker: too much oxygen would be transferred from the atmosphere to the crust
   if thinner: volcanic and tectonic activity would be too great
10. rotation period
   if longer: diurnal temperature differences would be too great
   if shorter: atmospheric wind velocities would be too great
11. gravitational interaction with a moon
   if greater: tidal effects on the oceans, atmosphere, and rotational period would be too severe
   if less: earth's orbital obliquity would change too much, causing climatic instabilities
12. magnetic field
   if stronger: electromagnetic storms would be too severe
   if weaker: no protection from solar wind particles
13. axial tilt
   if greater: surface temperature differences would be too great
   if less: surface temperature differences would be too great
14. albedo (ratio of reflected light to total light falling on the surface)
   if greater: runaway ice age would develop
   if less: runaway greenhouse effect would develop
15. oxygen to nitrogen ratio in atmosphere
   if larger: life functions would proceed too quickly
   if smaller: life functions would proceed too slowly
16. carbon dioxide and water vapor levels in atmosphere
   if greater: runaway greenhouse effect would develop
   if less: insufficient greenhouse effect
17. ozone level in atmosphere
   if greater: surface temperatures would become too low
   if less: surface temperatures would be too high; too much UV radiation
18. atmospheric electric discharge rate
   if greater: too much fire destruction
   if less: too little nitrogen fixing in the soil
19. seismic activity
   if greater: destruction of too many life-forms
   if less: nutrients on ocean floors would not be uplifted

About a dozen other parameters, such as atmospheric chemical composition, currently are being researched for their sensitivity in the support of life. However, the nineteen listed in Table 1 in themselves lead safely to the conclusion that far less than a trillionth of a trillionth of a percent of all stars will have a planet capable of sustaining life. Considering that the universe contains only about a trillion galaxies, each averaging a hundred billion stars, we can see that not even one planet would be expected, by natural processes alone, to possess the necessary conditions to sustain life. No wonder Robert Rood and James Trefil[14] and others have surmised that intelligent physical life exists only on the earth. It seems abundantly clear that the earth, too, in addition to the universe, has experienced divine design.

(Note: an updated list with 33 parameters, plus a dozen more being researched, can be found in The Creator and the Cosmos by Hugh Ross, copyright 1993 by Reasons To Believe; revised edition, copyright 1995, NavPress, pp. 131-145.)

a. The strong nuclear force is actually much more delicately balanced. An increase as small as two percent means that protons would never form from quarks (particles that form the building blocks of baryons and mesons). A similar decrease means that certain heavy elements essential for life would be unstable.

b. Direct observations of proton decay have yet to be confirmed. Experiments simply reveal that the average proton lifetime must exceed 10^32 years.[9] However, if the average proton lifetime exceeds about 10^34 years, then there would be no physical means for generating the matter that is observed in the universe.

c. An example of the greenhouse effect is a locked car parked in the sun. Visible light from the sun passes easily through the windows of the car, is absorbed by the interior, and reradiated as infrared light. But the windows will not permit the passage of infrared radiation. Hence, heat accumulates in the car's interior. Carbon dioxide in the atmosphere works like the windows of a car. The early earth had much more carbon dioxide in its atmosphere. However, the first plants extracted this carbon dioxide and released oxygen. Hence, the increase in the sun's luminosity was balanced by the decrease in the greenhouse effect caused by the lessened amount of carbon dioxide in the atmosphere.
d. A runaway freezing would occur because snow and ice reflect sunlight better than other materials on the surface of the earth. Less solar energy is absorbed, thereby lowering the surface temperature, which in turn creates more snow and ice.

e. The average number of planets per star is still largely unknown. The latest research suggests that only bachelor stars with characteristics similar to those of the sun may possess planets. Regardless, all researchers agree that the figure is certainly much less than one planet per star.

f. The assumption is that all life is based on carbon. Silicon and boron at one time were considered candidates for alternate life chemistries. However, silicon can sustain amino acid chains no more than a hundred such molecules long. Boron allows a little more complexity but has the disadvantage of not being very abundant in the universe.

g. One can easily get the impression from the physics literature that the Copenhagen interpretation of quantum mechanics is the only accepted philosophical explanation of what is going on in the micro world. According to this school of thought, "1) There is no reality in the absence of observation; 2) Observation creates reality." In addition to the Copenhagen interpretation, physicist Nick Herbert outlines and critiques six different philosophical models for interpreting quantum events.[35] Physicist and theologian Stanley Jaki outlines yet an eighth model.[36] While a clear philosophical understanding of quantum reality is not yet agreed upon, physicists do agree on the results one expects from quantum events.

h. Baryons are protons and other fundamental particles, such as neutrons, that decay into protons.

i. A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function of the molecule. This is vital since life necessarily exists in a sequence-disrupting radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 100. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules.

1. Wheeler, John A. "Foreword," in The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler. (Oxford, U.K.: Clarendon Press, 1986), p. vii.
2. Franz, Marie-Louise. Patterns of Creativity Mirrored in Creation Myths. (Zurich: Spring, 1972).
3. Kilzhaber, Albert R. Myths, Fables, and Folktales. (New York: Holt, 1974), pp. 113-114.
4. Dirac, P. A. M. "The Cosmological Constants," in Nature, 139. (1937), p. 323.
5. Dicke, Robert H. "Dirac's Cosmology and Mach's Principle," in Nature, 192. (1961), pp. 440-441.
6. Guth, Alan H. "Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems," in Physical Review D, 23. (1981), p.
7. Carr, B. J. and Rees, M. J. "The Anthropic Principle and the Structure of the Physical World," in Nature, 278. (1979), p. 610.
8. Barrow, John D. and Tipler, Frank J. The Anthropic Cosmological Principle. (New York: Oxford University Press, 1986), pp. 401-402.
9. Trefil, James S. The Moment of Creation: Big Bang Physics from before the First Millisecond to the Present Universe. (New York: Scribner's Sons, 1983), pp. 141-142.
10. Hoyle, Fred. "The Universe: Past and Present Reflections," in Annual Review of Astronomy and Astrophysics, 20. (1982), p. 16.
11. Greenstein, George. The Symbiotic Universe: Life and Mind in the Cosmos.
(New York: William Morrow, 1988), pp. 26-27.
12. Shklovskii, I. S. and Sagan, Carl. Intelligent Life in the Universe. (San Francisco: Holden-Day, 1966), pp. 343-350.
13. Ibid., p. 413.
14. Rood, Robert T. and Trefil, James S. Are We Alone? The Possibility of Extraterrestrial Civilizations. (New York: Charles Scribner's Sons).
15. Barrow, John D. and Tipler, Frank J. The Anthropic Cosmological Principle. (New York: Oxford University Press, 1986), pp. 510-575.
16. Anderson, Don L. "The Earth as a Planet: Paradigms and Paradoxes," in Science, 223. (1984), pp. 347-355.
17. Campbell, I. H. and Taylor, S. R. "No Water, No Granite - No Oceans, No Continents," in Geophysical Research Letters, 10. (1983).
18. Carter, Brandon. "The Anthropic Principle and Its Implications for Biological Evolution," in Philosophical Transactions of the Royal Society of London, Series A, 310. (1983), pp. 352-363.
19. Hammond, Allen H. "The Uniqueness of the Earth's Climate," in Science, 187. (1975), p. 245.
20. Toon, Owen B. and Olson, Steve. "The Warm Earth," in Science 85, October. (1985), pp. 50-57.
21. Gale, George. "The Anthropic Principle," in Scientific American, 245, No. 6. (1981), pp. 154-171.
22. Ross, Hugh. Genesis One: A Scientific Perspective. (Pasadena, California: Reasons To Believe, 1983), pp. 6-7.
23. Cotnell, Ron. The Remarkable Spaceship Earth. (Denver, Colorado: Accent Books, 1982).
24. ter Haar, D. "On the Origin of the Solar System," in Annual Review of Astronomy and Astrophysics, 5. (1967), pp. 267-278.
25. Greenstein, George. The Symbiotic Universe: Life and Mind in the Cosmos. (New York: William Morrow, 1988), pp. 68-97.
26. Templeton, John M. "God Reveals Himself in the Astronomical and in the Infinitesimal," in Journal of the American Scientific Affiliation, December. (1984), pp. 196-198.
27. Hart, Michael H. "The Evolution of the Atmosphere of the Earth," in Icarus, 33. (1978), pp. 23-39.
28. Hart, Michael H. "Habitable Zones about Main Sequence Stars," in Icarus, 37. (1979), pp. 351-357.
29. Owen, Tobias, Cess, Robert D., and Ramanathan, V. "Enhanced CO2 Greenhouse to Compensate for Reduced Solar Luminosity on Early Earth," in Nature, 277. (1979), pp. 640-641.
30. Ward, William R. "Comments on the Long-Term Stability of the Earth's Obliquity," in Icarus, 50. (1982), pp. 444-448.
31. Gribbin, John. "The Origin of Life: Earth's Lucky Break," in Science Digest, May. (1983), pp. 36-102.
32. Davies, Paul. The Cosmic Blueprint: New Discoveries in Nature's Creative Ability to Order the Universe. (New York: Simon and Schuster, 1988), p. 203.
33. Wheeler, John Archibald. "Bohr, Einstein, and the Strange Lesson of the Quantum," in Mind in Nature, edited by Richard Q. Elvee. (New York: Harper and Row, 1981), p. 18.
34. Greenstein, George. The Symbiotic Universe: Life and Mind in the Cosmos. (New York: William Morrow, 1988), p. 223.
35. Herbert, Nick. Quantum Reality: Beyond the New Physics: An Excursion into Metaphysics and the Meaning of Reality. (New York: Anchor Books, Doubleday, 1987), in particular pp. 16-29.
36. Jaki, Stanley L. Cosmos and Creator. (Edinburgh, U.K.: Scottish Academic Press, 1980), pp. 96-98.
37. Trefil, James S. The Moment of Creation. (New York: Charles Scribner's Sons, 1983), pp. 91-101.
38. Barrow, John D. and Tipler, Frank J. The Anthropic Cosmological Principle. (New York: Oxford University Press, 1986).
39. Ibid., p. 677.
40. Ibid., pp. 677, 682.
41. Gardner, Martin. "WAP, SAP, PAP, and FAP," in The New York Review of Books, 23, May 8, No. 8. (1986), pp. 22-25.
42. The Holy Bible, New International Version. Colossians 2:8.
43. Yockey, Hubert P. "On the Information Content of Cytochrome c," in Journal of Theoretical Biology, 67. (1977), pp. 345-376.
44. Yockey, Hubert P. "An Application of Information Theory to the Central Dogma and Sequence Hypothesis," in Journal of Theoretical Biology, 46. (1974), pp. 369-406.
45. Yockey, Hubert P. "Self Organization Origin of Life Scenarios and Information Theory," in Journal of Theoretical Biology, 91. (1981).
46. Lake, James A. "Evolving Ribosome Structure: Domains in Archaebacteria, Eubacteria, Eocytes, and Eukaryotes," in Annual Review of Biochemistry, 54. (1985), pp. 507-530.
47. Dufton, M. J. "Genetic Code Redundancy and the Evolutionary Stability of Protein Secondary Structure," in Journal of Theoretical Biology, 116. (1985), pp. 343-348.
48. Yockey, Hubert P. "Do Overlapping Genes Violate Molecular Biology and the Theory of Evolution," in Journal of Theoretical Biology, 80. (1979), pp. 21-26.
49. Abelson, John. "RNA Processing and the Intervening Sequence Problem," in Annual Review of Biochemistry, 48. (1979), pp.
50. Hinegardner, Ralph T. and Engleberg, Joseph. "Rationale for a Universal Genetic Code," in Science, 142. (1963), pp. 1083-1085.
51. Neurath, Hans. "Protein Structure and Enzyme Action," in Reviews of Modern Physics, 31. (1959), pp. 185-190.
52. Hoyle, Fred and Wickramasinghe, Chandra. Evolution From Space: A Theory of Cosmic Creationism. (New York: Simon and Schuster, 1981), pp. 14-97.
53. Thaxton, Charles B., Bradley, Walter L., and Olsen, Roger. The Mystery of Life's Origin: Reassessing Current Theories. (New York: Philosophical Library, 1984).
54. Shapiro, Robert. Origins: A Skeptic's Guide to the Creation of Life on Earth. (New York: Summit Books, 1986), pp. 117-131.
55. Ross, Hugh. Genesis One: A Scientific Perspective, second edition. (Pasadena, Calif.: Reasons To Believe, 1983), pp. 9-10.
56. Yockey, Hubert P. "A Calculation of the Probability of Spontaneous Biogenesis by Information Theory," in Journal of Theoretical Biology, 67. (1977), pp. 377-398.
57. Duley, W. W. "Evidence Against Biological Grains in the Interstellar Medium," in Quarterly Journal of the Royal Astronomical Society, 25. (1984), pp. 109-113.
58. Kok, Randall A., Taylor, John A., and Bradley, Walter L. "A Statistical Examination of Self-Ordering of Amino Acids in Proteins," in Origins of Life and Evolution of the Biosphere, 18. (1988), pp. 135-142.
http://www.bibliotecapleyades.net/esp_diseno_antropico_2.htm
13
113
Flow measurement is the quantification of bulk fluid movement. It can be measured in a variety of ways.

Both gas and liquid flow can be measured in volumetric or mass flow rates, such as litres per second or kilograms per second. These measurements can be converted between one another if the material's density is known. The density for a liquid is almost independent of the liquid conditions; however, this is not the case for a gas, the density of which depends greatly upon pressure, temperature and, to a lesser extent, the gas composition. When gases or liquids are transferred for their energy content, such as the sale of natural gas, the flow rate may also be expressed in terms of energy flow, such as GJ/hour or BTU/day. The energy flow rate is the volume flow rate multiplied by the energy content per unit volume, or the mass flow rate multiplied by the energy content per unit mass. Where an accurate energy flow rate is desired, most flow meters will be used to calculate the volume or mass flow rate, which is then adjusted to the energy flow rate by the use of a flow computer.

Gases are compressible and change volume when placed under pressure or are heated or cooled. A volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. References will be made to "actual" flow rate through a meter and "standard" or "base" flow rate through a meter, with units such as acm/h (actual cubic meters per hour), kscm/h (kilo standard cubic meters per hour), or MSCFD (thousands of standard cubic feet per day). For liquids, various units are used depending upon the application and industry, but might include gallons (U.S. liquid or imperial) per minute, liters per second, bushels per minute or, when describing river flows, cumecs (cubic metres per second) or acre-feet per day. In oceanography a common unit to measure volume transport (the volume of water transported by a current, for example) is the sverdrup (Sv), equivalent to 10^6 m^3/s.
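As a small illustration of the unit conversions just described, the sketch below turns a volumetric flow rate into mass and energy flow rates; the gas density and heating value are assumed example figures, not reference data for any particular gas.

```python
# Sketch: converting a volumetric flow rate to mass and energy flow rates.
# The density and heating value below are assumed example values.

def mass_flow_kg_per_s(volume_flow_m3_per_s: float, density_kg_per_m3: float) -> float:
    """Mass flow = volumetric flow x density."""
    return volume_flow_m3_per_s * density_kg_per_m3

def energy_flow_gj_per_h(volume_flow_m3_per_s: float, energy_mj_per_m3: float) -> float:
    """Energy flow = volumetric flow x energy content per unit volume."""
    mj_per_s = volume_flow_m3_per_s * energy_mj_per_m3
    return mj_per_s * 3600.0 / 1000.0   # MJ/s -> GJ/h

if __name__ == "__main__":
    q = 0.5      # m^3/s of gas at flowing conditions (example)
    rho = 0.8    # kg/m^3 at those conditions (assumed)
    hv = 38.0    # MJ/m^3 heating value (assumed)
    print(f"mass flow:   {mass_flow_kg_per_s(q, rho):.2f} kg/s")
    print(f"energy flow: {energy_flow_gj_per_h(q, hv):.1f} GJ/h")
```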
There are several types of mechanical flow meter. Perhaps the simplest way to measure volumetric flow is to measure how long it takes to fill a container. A simple example is using a bucket of known volume, filled by a hose. The stopwatch is started when the flow starts, and stopped when the bucket overflows. The volume divided by the time gives the flow. The bucket-and-stopwatch method is an off-line method, meaning that the measurement cannot be taken without interrupting the normal flow.

Because they are used for domestic water measurement, piston meters, also known as rotary piston or semi-positive displacement meters, are the most common flow measurement devices in the UK and are used for almost all meter sizes up to and including 40 mm (1 1/2"). The piston meter operates on the principle of a piston rotating within a chamber of known volume. For each rotation, an amount of water passes through the piston chamber. Through a gear mechanism and, sometimes, a magnetic drive, a needle dial and odometer type display are advanced.

The variable area (VA) meter, also commonly called a rotameter, consists of a tapered tube, typically made of glass, with a float inside that is pushed up by fluid flow and pulled down by gravity. As flow rate increases, greater viscous and pressure forces on the float cause it to rise until it becomes stationary at a location in the tube that is wide enough for the forces to balance. Floats are made in many different shapes, with spheres and spherical ellipses being the most common. Some are designed to spin visibly in the fluid stream to aid the user in determining whether the float is stuck or not. Rotameters are available for a wide range of liquids but are most commonly used with water or air. They can be made to measure flow reliably with an accuracy of 1%.

The turbine flow meter (better described as an axial turbine) translates the mechanical action of the turbine rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The turbine tends to have all the flow traveling around it. The turbine wheel is set in the path of a fluid stream. The flowing fluid impinges on the turbine blades, imparting a force to the blade surface and setting the rotor in motion. When a steady rotation speed has been reached, the speed is proportional to fluid velocity. Turbine flow meters are used for the measurement of natural gas and liquid flow.

The Woltmann meter comprises a rotor with helical blades inserted axially in the flow, much like a ducted fan; it can be considered a type of turbine flow meter. They are commonly referred to as helix meters, and are popular at larger sizes.

The paddle wheel meter is similar to the single jet meter, except that the impeller is small with respect to the width of the pipe, and projects only partially into the flow, like the paddle wheel on a Mississippi riverboat.

A multiple jet or multijet meter is a velocity type meter which has an impeller which rotates horizontally on a vertical shaft. The impeller element is in a housing in which multiple inlet ports direct the fluid flow at the impeller, causing it to rotate in a specific direction in proportion to the flow velocity. This meter works mechanically much like a single jet meter except that the ports direct the flow at the impeller equally from several points around the circumference of the element, not just one point; this minimizes uneven wear on the impeller and its shaft.

The Pelton wheel turbine (better described as a radial turbine) translates the mechanical action of the Pelton wheel rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The Pelton wheel tends to have all the flow traveling around it with the inlet flow focused on the blades by a jet. The original Pelton wheels were used for the generation of power and consisted of a radial flow turbine with "reaction cups" which not only move with the force of the water on the face but return the flow in the opposite direction, using this change of fluid direction to further increase the efficiency of the turbine.

An oval gear meter is a positive displacement meter that uses two or more oblong gears configured to rotate at right angles to one another, forming a tee shape. Such a meter has two sides, which can be called A and B. No fluid passes through the center of the meter, where the teeth of the two gears always mesh. On one side of the meter (A), the teeth of the gears close off the fluid flow because the elongated gear on side A is protruding into the measurement chamber, while on the other side of the meter (B), a cavity holds a fixed volume of fluid in a measurement chamber. As the fluid pushes the gears, it rotates them, allowing the fluid in the measurement chamber on side B to be released into the outlet port. Meanwhile, fluid entering the inlet port will be driven into the measurement chamber of side A, which is now open. The teeth on side B will now close off the fluid from entering side B.
This cycle continues as the gears rotate and fluid is metered through alternating measurement chambers. Permanent magnets in the rotating gears can transmit a signal to an electric reed switch or current transducer for flow measurement.

The nutating disk meter is the most commonly used measurement system for measuring water supply. The fluid, most commonly water, enters in one side of the meter and strikes the nutating disk, which is eccentrically mounted. The disk must then "wobble" or nutate about the vertical axis, since the bottom and the top of the disk remain in contact with the mounting chamber. A partition separates the inlet and outlet chambers. As the disk nutates, it gives direct indication of the volume of the liquid that has passed through the meter, as volumetric flow is indicated by a gearing and register arrangement which is connected to the disk. It is reliable for flow measurements to within 1 percent.

There are several types of flow meter that rely on Bernoulli's principle, either by measuring the differential pressure within a constriction, or by measuring static and stagnation pressures to derive the dynamic pressure.

A Venturi meter constricts the flow in some fashion, and pressure sensors measure the differential pressure before and within the constriction. This method is widely used to measure flow rate in the transmission of gas through pipelines, and has been used since Roman Empire times. The coefficient of discharge of a Venturi meter ranges from 0.93 to 0.97.

An orifice plate is a plate with a hole through it, placed in the flow; it constricts the flow, and measuring the pressure differential across the constriction gives the flow rate. It is basically a crude form of Venturi meter, but with higher energy losses. There are three types of orifice plate: concentric, eccentric, and segmental.

The Dall tube is a shortened version of a Venturi meter, with a lower pressure drop than an orifice plate. As with these flow meters, the flow rate in a Dall tube is determined by measuring the pressure drop caused by restriction in the conduit. The pressure differential is typically measured using diaphragm pressure transducers with digital readout. Since these meters have significantly lower permanent pressure losses than orifice meters, Dall tubes are widely used for measuring the flow rate of large pipeworks.

A Pitot tube is a pressure measuring instrument used to measure fluid flow velocity by determining the stagnation pressure. Bernoulli's equation is used to calculate the dynamic pressure and hence fluid velocity. Multi-hole pressure probes (also called impact probes) extend the theory of the Pitot tube to more than one dimension. A typical impact probe consists of three or more holes (depending on the type of probe) on the measuring tip arranged in a specific pattern. More holes allow the instrument to measure the direction of the flow velocity in addition to its magnitude (after appropriate calibration). Three holes arranged in a line allow the pressure probes to measure the velocity vector in two dimensions. Introduction of more holes, e.g. five holes arranged in a "plus" formation, allows measurement of the three-dimensional velocity vector.

Optical flow meters use light to determine flow rate. Small particles which accompany natural and industrial gases pass through two laser beams focused in a pipe by illuminating optics. Laser light is scattered when a particle crosses the first beam. The detecting optics collects scattered light on a photodetector, which then generates a pulse signal.
If the same particle crosses the second beam, the detecting optics collect scattered light on a second photodetector, which converts the incoming light into a second electrical pulse. By measuring the time interval between these pulses, the gas velocity is calculated as V = D / T, where D is the distance between the laser beams and T is the time interval. Laser-based optical flow meters measure the actual speed of particles, a property which is not dependent on thermal conductivity of gases, variations in gas flow or composition of gases. The operating principle enables optical laser technology to deliver highly accurate flow data, even in challenging environments which may include high temperature, low flow rates, high pressure, high humidity, pipe vibration and acoustic noise. Optical flow meters are very stable, with no moving parts, and deliver a highly repeatable measurement over the life of the product. Because the distance between the two laser sheets does not change, optical flow meters do not require periodic calibration after their initial commissioning. Optical flow meters require only one installation point, instead of the two installation points typically required by other types of meters. A single installation point is simpler, requires less maintenance and is less prone to errors. Optical flow meters are capable of measuring flow from 0.1 m/s to faster than 100 m/s (a 1000:1 turndown ratio) and have been demonstrated to be effective for the measurement of flare gases, a major global contributor to the emissions associated with climate change.

For open-channel measurements, the level of the water is measured at a designated point behind a hydraulic structure (a weir or flume) using various means (bubblers, ultrasonic, float, and differential pressure are common methods). This depth is converted to a flow rate according to a theoretical formula of the form Q = K * H^X, where Q is the flow rate, K is a constant, H is the water level, and X is an exponent which varies with the device used; or it is converted according to empirically derived level/flow data points (a "flow curve"). The flow rate can then be integrated over time into volumetric flow.

Alternatively, the cross-sectional area of the flow is calculated from a depth measurement and the average velocity of the flow is measured directly (Doppler and propeller methods are common). Velocity times the cross-sectional area yields a flow rate which can be integrated into volumetric flow. Acoustic Doppler velocimetry (ADV) is designed to record instantaneous velocity components at a single point with a relatively high frequency. Measurements are performed by measuring the velocity of particles in a remote sampling volume based upon the Doppler shift effect.
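As a sketch of the level-to-flow conversion just described, the snippet below applies a power-law rating of the form Q = K * H^X and integrates the resulting flow over time; the K and X values are placeholders for illustration, not the rating of any particular weir or flume.

```python
# Sketch: converting measured water level behind a weir to flow, then to volume.
# K and X below are illustrative placeholders; real values come from the
# hydraulic structure's rating or an empirically derived flow curve.

def level_to_flow(h_m: float, k: float = 1.4, x: float = 2.5) -> float:
    """Power-law rating Q = K * H**X (an exponent near 2.5 is typical of a V-notch weir)."""
    return k * h_m ** x

def total_volume(levels_m, dt_s: float) -> float:
    """Integrate flow over time (simple rectangle rule) to get volume in m^3."""
    return sum(level_to_flow(h) * dt_s for h in levels_m)

if __name__ == "__main__":
    # Level samples taken every 60 s (assumed example data).
    levels = [0.10, 0.12, 0.15, 0.14, 0.11]
    print(f"volume passed: {total_volume(levels, 60.0):.2f} m^3")
```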
Thermal mass flow meters generally use combinations of heated elements and temperature sensors to measure the difference between static and flowing heat transfer to a fluid and infer its flow with a knowledge of the fluid's specific heat and density. The fluid temperature is also measured and compensated for. If the density and specific heat characteristics of the fluid are constant, the meter can provide a direct mass flow readout, and does not need any additional pressure and temperature compensation over its specified range. Technological progress has allowed the manufacture of thermal mass flow meters on a microscopic scale as MEMS sensors; these flow devices can be used to measure flow rates in the range of nanolitres or microlitres per minute. Thermal mass flow meter technology is used for compressed air, nitrogen, helium, argon, oxygen, and natural gas. In fact, most gases can be measured as long as they are fairly clean and non-corrosive. For more aggressive gases, the meter may be made out of special alloys (e.g. Hastelloy), and pre-drying the gas also helps to minimize corrosion.

Another method of flow measurement involves placing a bluff body (called a shedder bar) in the path of the fluid. As the fluid passes this bar, disturbances in the flow called vortices are created. The vortices trail behind the cylinder, alternately from each side of the bluff body. This vortex trail is called the Von Kármán vortex street after von Kármán's 1912 mathematical description of the phenomenon. The frequency at which these vortices alternate sides is essentially proportional to the flow rate of the fluid. Inside, atop, or downstream of the shedder bar is a sensor for measuring the frequency of the vortex shedding. This sensor is often a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created. Since the frequency of such a voltage pulse is also proportional to the fluid velocity, a volumetric flow rate is calculated using the cross-sectional area of the flow meter. The frequency is measured and the flow rate is calculated by the flowmeter electronics using the equation f = S * V / L, where f is the frequency of the vortices, L the characteristic length of the bluff body, V is the velocity of the flow over the bluff body, and S is the Strouhal number, which is essentially a constant for a given body shape within its operating limits. Modern innovations in the measurement of flow rate incorporate electronic devices that can correct for varying pressure and temperature (i.e. density) conditions, non-linearities, and the characteristics of the fluid.

The most common flow meter apart from mechanical flow meters is the magnetic flow meter, commonly referred to as a "mag meter" or an "electromag". A magnetic field is applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The physical principle at work is Faraday's law of electromagnetic induction. The magnetic flow meter requires a conducting fluid, e.g. water, and an electrically insulating pipe surface, e.g. a rubber-lined nonmagnetic steel tube.

Ultrasonic flow meters measure the difference of the transit time of ultrasonic pulses propagating in and against the flow direction. This time difference is a measure of the average velocity of the fluid along the path of the ultrasonic beam. By using the absolute transit times, both the averaged fluid velocity and the speed of sound can be calculated. Using the two transit times t_up and t_down, the distance L between receiving and transmitting transducers, and the inclination angle α between the sound path and the flow direction, one can write the equations

v = (L / (2 cos α)) * (t_up − t_down) / (t_up * t_down)   and   c = (L / 2) * (t_up + t_down) / (t_up * t_down),

where v is the average velocity of the fluid along the sound path and c is the speed of sound. Ultrasonic flow meters are used for the measurement of natural gas flow. One can also calculate the expected speed of sound for a given sample of gas; this can be compared to the speed of sound empirically measured by an ultrasonic flow meter for the purposes of monitoring the quality of the flow meter's measurements. A drop in quality is an indication that the meter needs servicing.
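The transit-time relations above can be evaluated directly; the sketch below solves for the flow velocity and the speed of sound from the two measured transit times, with the path length, angle and timing values chosen purely as assumed examples.

```python
# Sketch: transit-time ultrasonic flow measurement.
# Geometry and timing values below are assumed example figures.
import math

def transit_time_velocity(t_up, t_down, path_length_m, angle_rad):
    """Return (flow velocity v, speed of sound c):
       v = L/(2 cos a) * (t_up - t_down)/(t_up*t_down)
       c = L/2        * (t_up + t_down)/(t_up*t_down)"""
    v = path_length_m / (2.0 * math.cos(angle_rad)) * (t_up - t_down) / (t_up * t_down)
    c = path_length_m / 2.0 * (t_up + t_down) / (t_up * t_down)
    return v, c

if __name__ == "__main__":
    L = 0.15                            # m, transducer separation along the sound path (assumed)
    alpha = math.radians(45)            # inclination angle between sound path and flow (assumed)
    t_down, t_up = 101.0e-6, 101.4e-6   # s, transit times with and against the flow (assumed)
    v, c = transit_time_velocity(t_up, t_down, L, alpha)
    print(f"flow velocity: {v:.2f} m/s, speed of sound: {c:.0f} m/s")
```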
Measurement of the Doppler shift resulting from reflecting an ultrasonic beam off the flowing fluid is another recent innovation. By passing an ultrasonic beam through the tissues, bouncing it off a reflective plate, then reversing the direction of the beam and repeating the measurement, the volume of blood flow can be estimated. The frequency of the transmitted beam is affected by the movement of blood in the vessel, and by comparing the frequency of the upstream beam versus the downstream beam, the flow of blood through the vessel can be measured. The difference between the two frequencies is a measure of true volume flow. A wide-beam sensor can also be used to measure flow independent of the cross-sectional area of the blood vessel. For the Doppler principle to work in a flowmeter it is mandatory that the flow stream contains sonically reflective materials, such as solid particles or entrained air bubbles. A related technology is acoustic Doppler velocimetry.

Using the Coriolis effect that causes a laterally vibrating tube to distort, a direct measurement of mass flow can be obtained in a Coriolis flow meter. Furthermore, a direct measure of the density of the fluid is obtained. Coriolis measurement can be very accurate irrespective of the type of gas or liquid that is measured; the same measurement tube can be used for hydrogen gas and bitumen without recalibration. Coriolis flow meters can be used for the measurement of natural gas flow.

Blood flow can be measured through the use of a monochromatic laser diode. The laser probe is inserted into a tissue and turned on, where the light scatters and a small portion is reflected back to the probe. The signal is then processed to calculate flow within the tissues. There are limitations to the use of a laser Doppler probe; flow within a tissue is dependent on the volume illuminated, which is often assumed rather than measured and varies with the optical properties of the tissue. In addition, variations in the type and placement of the probe within identical tissues and individuals result in variations in reading. The laser Doppler has the advantage of sampling a small volume of tissue, allowing for great precision, but does not necessarily represent the flow within an entire organ. The flow meter is much more useful for relative rather than absolute measurements.

Even though ideally the flowmeter should be unaffected by its environment, in practice this is unlikely to be the case. Often measurement errors originate from incorrect installation or other environment-dependent factors. In situ methods are used when the flow meter is calibrated in the correct flow conditions.

For pipe flows, a so-called transit-time method is applied where a radiotracer is injected as a pulse into the measured flow. The transit time is defined with the help of radiation detectors placed on the outside of the pipe. The volume flow is obtained by multiplying the measured average fluid flow velocity by the inner pipe cross section. This reference flow value is compared with the simultaneous flow value given by the flow measurement to be calibrated. The procedure is standardised (ISO 2975/VII for liquids and BS 5857-2.4 for gases). The best accredited measurement uncertainty for liquids and gases is 0.5%.
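The radiotracer transit-time calibration described above amounts to velocity = detector spacing / transit time, multiplied by the pipe cross-section; a minimal sketch, with all numbers assumed for illustration:

```python
# Sketch: the radiotracer transit-time idea for in-situ calibration.
# Detector spacing, transit time and pipe bore are assumed example values.
import math

def reference_flow(detector_spacing_m: float, transit_time_s: float,
                   pipe_inner_diameter_m: float) -> float:
    """Mean velocity = spacing / transit time; reference flow = velocity * cross-section."""
    velocity = detector_spacing_m / transit_time_s
    area = math.pi * (pipe_inner_diameter_m / 2.0) ** 2
    return velocity * area

if __name__ == "__main__":
    q_ref = reference_flow(detector_spacing_m=5.0,    # m between the two detectors (assumed)
                           transit_time_s=2.5,        # s for the tracer pulse to travel (assumed)
                           pipe_inner_diameter_m=0.2)
    print(f"reference flow: {q_ref*3600:.1f} m^3/h")  # compared with the meter under test
```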
The radiotracer dilution method is used to calibrate open channel flow measurements. A solution with a known tracer concentration is injected at a constant known velocity into the channel flow. Downstream, where the tracer solution is thoroughly mixed over the flow cross section, a continuous sample is taken and its tracer concentration in relation to that of the injected solution is determined. The flow reference value is determined by using the tracer balance condition between the injected tracer flow and the diluting flow. The procedure is standardised (ISO 9555-1 and ISO 9555-2 for liquid flow in open channels). The best accredited measurement uncertainty is 1%.
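The tracer balance mentioned here can be written out explicitly: if a solution of concentration C_inj is injected at a constant rate q and the fully mixed downstream concentration is C_down, then q * C_inj = (Q + q) * C_down, so Q = q * (C_inj − C_down) / C_down. A minimal sketch with assumed example numbers:

```python
# Sketch: constant-rate tracer dilution for open-channel flow calibration.
# Concentrations and injection rate are assumed example values.

def channel_flow(q_inj_l_s: float, c_inj: float, c_down: float) -> float:
    """Tracer mass balance: q*C_inj = (Q + q)*C_down  =>  Q = q*(C_inj - C_down)/C_down."""
    return q_inj_l_s * (c_inj - c_down) / c_down

if __name__ == "__main__":
    Q = channel_flow(q_inj_l_s=0.05,   # L/s of tracer solution injected (assumed)
                     c_inj=1000.0,     # tracer concentration of the injected solution (mg/L)
                     c_down=0.02)      # concentration measured downstream after mixing (mg/L)
    print(f"channel flow: {Q:.0f} L/s")
```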
http://www.thefullwiki.org/Flow_measurement
13
50
In mathematics and computational science, the Euler method is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis (published 1768–70). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. It also suffers from stability problems. For these reasons, the Euler method is not often used in practice. It serves as the basis to construct more complicated methods.

Informal geometrical description

Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated. The idea is that while the curve is initially unknown, its starting point, which we denote by A_0, is known. Then, from the differential equation, the slope of the curve at A_0 can be computed, and so the tangent line. Take a small step along that tangent line up to a point A_1. Along this small step, the slope does not change too much, so A_1 will be close to the curve. If we pretend that A_1 is still on the curve, the same reasoning as for the point A_0 above can be used. After several steps, a polygonal curve A_0 A_1 A_2 ... is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.

Formulation of the method

Suppose that we want to approximate the solution of the initial value problem

y'(t) = f(t, y(t)),   y(t_0) = y_0.

Choose a value h for the size of every step and set t_n = t_0 + n h. Now, one step of the Euler method from t_n to t_{n+1} = t_n + h is

y_{n+1} = y_n + h f(t_n, y_n).

The value of y_n is an approximation of the solution to the ODE at time t_n: y_n ≈ y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i ≤ n.

While the Euler method integrates a first-order ODE, any ODE of order N can be represented as a first-order ODE: to treat the equation

y^(N)(t) = f(t, y(t), y'(t), ..., y^(N−1)(t)),

we introduce auxiliary variables z_1(t) = y(t), z_2(t) = y'(t), ..., z_N(t) = y^(N−1)(t) and obtain the equivalent equation

z'(t) = (z_2(t), ..., z_N(t), f(t, z_1(t), ..., z_N(t))).

This is a first-order system in the variable z(t) and can be handled by Euler's method or, in fact, by any other scheme for first-order systems.

Given the initial value problem y' = y, y(0) = 1, we would like to use the Euler method to approximate y(4).

Using step size equal to 1 (h = 1)

The Euler method is y_{n+1} = y_n + h f(t_n, y_n), so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t, y) = y. We have

f(t_0, y_0) = f(0, 1) = 1.

By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0, 1). Recall that the slope is defined as the change in y divided by the change in t, or Δy/Δt. The next step is to multiply the above value by the step size h, which we take equal to one here:

h · f(y_0) = 1 · 1 = 1.

Since the step size is the change in t, when we multiply the step size and the slope of the tangent, we get a change in y value. This value is then added to the initial y value to obtain the next value to be used for computations:

y_1 = y_0 + h f(y_0) = 1 + 1 = 2.

The above steps should be repeated to find y_2, y_3 and y_4.
Due to the repetitive nature of this algorithm, it can be helpful to organize computations in a chart form, as seen below, to avoid making errors.

y_n    t_n    f(t_n, y_n)    h    Δy = h f(t_n, y_n)    y_{n+1} = y_n + Δy
1      0      1              1    1                      2
2      1      2              1    2                      4
4      2      4              1    4                      8
8      3      8              1    8                      16

The conclusion of this computation is that y_4 = 16. The exact solution of the differential equation is y(t) = e^t, so y(4) = e^4 ≈ 54.598. Thus, the approximation of the Euler method is not very good in this case, although its behaviour is qualitatively right.

Using other step sizes

As suggested in the introduction, the Euler method is more accurate if the step size is smaller. The table below shows the result with different step sizes. The top row corresponds to the example in the previous section.

step size    result of Euler's method    error
1            16.00                       38.598
0.25         35.53                       19.07
0.1          45.26                        9.34
0.05         49.56                        5.04
0.025        51.98                        2.62
0.0125       53.26                        1.34

The error recorded in the last column of the table is the difference between the exact solution at t = 4 and the Euler approximation. Towards the bottom of the table, the step size in each row is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details. Other methods, such as the midpoint method, behave more favourably: the error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order.

We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, people usually employ alternative, higher-order methods such as Runge–Kutta methods or linear multistep methods, especially if a high accuracy is desired.
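The convergence behaviour in the table above is easy to reproduce with a few lines of code; the following is a plain implementation of the forward Euler step applied to the example y' = y, y(0) = 1, not code taken from the article itself.

```python
# Forward Euler for y' = f(t, y); demonstrated on y' = y, y(0) = 1, approximating y(4).
import math

def euler(f, t0, y0, t_end, h):
    """Repeatedly apply y_{n+1} = y_n + h*f(t_n, y_n) for n steps of size h."""
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

if __name__ == "__main__":
    f = lambda t, y: y                 # the example ODE y' = y
    exact = math.exp(4)                # exact solution y(4) = e^4
    for h in [1, 0.25, 0.1, 0.05, 0.025, 0.0125]:
        approx = euler(f, 0.0, 1.0, 4.0, h)
        print(f"h = {h:<7} Euler = {approx:8.3f}  error = {exact - approx:7.3f}")
```

Running it reproduces the table: halving the step size roughly halves the error, which is the first-order behaviour discussed above.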
Derivation

The Euler method can be derived in a number of ways. Firstly, there is the geometrical description mentioned above. Another possibility is to consider the Taylor expansion of the function y around t_0:

y(t_0 + h) = y(t_0) + h y'(t_0) + (1/2) h^2 y''(t_0) + O(h^3).

The differential equation states that y'(t) = f(t, y(t)). If this is substituted in the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises. The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods.

A closely related derivation is to substitute the forward finite difference formula for the derivative,

y'(t_0) ≈ (y(t_0 + h) − y(t_0)) / h.

Finally, one can integrate the differential equation from t_0 to t_0 + h and apply the fundamental theorem of calculus to get:

y(t_0 + h) − y(t_0) = ∫_{t_0}^{t_0 + h} f(t, y(t)) dt.

Now approximate the integral by the left-hand rectangle method (with only one rectangle):

∫_{t_0}^{t_0 + h} f(t, y(t)) dt ≈ h f(t_0, y(t_0)).

Combining both equations, one again finds the Euler step.

Local truncation error

The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, y_1, and the exact solution at time t_1 = t_0 + h. The numerical solution is given by

y_1 = y_0 + h f(t_0, y_0).

For the exact solution, we use the Taylor expansion mentioned in the section Derivation above:

y(t_0 + h) = y(t_0) + h y'(t_0) + (1/2) h^2 y''(t_0) + O(h^3).

The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations:

LTE = y(t_0 + h) − y_1 = (1/2) h^2 y''(t_0) + O(h^3).

This result is valid if y has a bounded third derivative. This shows that for small h, the local truncation error is approximately proportional to h^2. This makes the Euler method less accurate (for small h) than other higher-order techniques such as Runge–Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size.

A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If y has a continuous second derivative, then there exists a ξ in [t_0, t_0 + h] such that

LTE = (1/2) h^2 y''(ξ).

In the above expressions for the error, the second derivative of the unknown exact solution y can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation y' = f(t, y) that

y''(t) = ∂f/∂t (t, y(t)) + ∂f/∂y (t, y(t)) · f(t, y(t)).

Global truncation error

The global truncation error is the error at a fixed time t, after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be (t − t_0)/h, which is proportional to 1/h, and the error committed in each step is proportional to h^2 (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to h.

This intuitive reasoning can be made precise. If the solution y has a bounded second derivative and f is Lipschitz continuous in its second argument, then the global truncation error (GTE) is bounded by

|GTE| ≤ (h M / (2 L)) (e^{L (t − t_0)} − 1),

where M is an upper bound on the second derivative of y on the given interval and L is the Lipschitz constant of f. The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to h. For this reason, the Euler method is said to be first order.

Numerical stability

The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation

y'(t) = −2.3 y(t),   y(0) = 1.

The exact solution is y(t) = e^{−2.3 t}, which decays to zero as t → ∞. However, if the Euler method is applied to this equation with step size h = 1, then the numerical solution is qualitatively wrong: it oscillates and grows. This is what it means to be unstable. If a smaller step size is used, for instance h = 0.7, then the numerical solution does decay to zero.

If the Euler method is applied to the linear equation y'(t) = k y(t), then the numerical solution is unstable if the product h k lies outside the stability region |1 + h k| ≤ 1 (for real k, the interval −2 ≤ h k ≤ 0); the complement of this region is called the (linear) instability region. In the example, k equals −2.3, so if h = 1 then h k = −2.3, which is outside the stability region, and thus the numerical solution is unstable. This limitation—along with its slow convergence of error with h—means that the Euler method is not often used, except as a simple example of numerical integration.
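The instability just described is easy to observe numerically; the sketch below applies the forward Euler step to y' = −2.3y, y(0) = 1 with the two step sizes discussed above (an illustration of the stability argument, not code from the source).

```python
# Forward Euler applied to the test problem y' = -2.3*y, y(0) = 1.
# With h = 1 the growth factor (1 + h*k) = -1.3 has magnitude > 1, so the
# iterates oscillate and grow; with h = 0.7 the factor is -0.61 and they decay.

def euler_linear(k: float, y0: float, h: float, steps: int):
    y = y0
    values = [y]
    for _ in range(steps):
        y += h * k * y          # y_{n+1} = (1 + h*k) * y_n
        values.append(y)
    return values

if __name__ == "__main__":
    for h in (1.0, 0.7):
        ys = euler_linear(k=-2.3, y0=1.0, h=h, steps=6)
        print(f"h = {h}: " + ", ".join(f"{v:.3f}" for v in ys))
```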
Rounding errors

The discussion up to now has ignored the consequences of rounding error. In step n of the Euler method, the rounding error is roughly of the magnitude ε·y_n, where ε is the machine epsilon. Assuming that the rounding errors are all of approximately the same size, the combined rounding error in N steps is roughly N·ε·y_0 if all errors point in the same direction. Since the number of steps is inversely proportional to the step size h, the total rounding error is proportional to ε / h. In reality, however, it is extremely unlikely that all rounding errors point in the same direction. If instead it is assumed that the rounding errors are independent random variables, then the total rounding error is proportional to ε / √h. Thus, for extremely small values of the step size, the truncation error will be small but the effect of rounding error may be big. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.

Modifications and extensions

A simple modification of the Euler method which eliminates the stability problems noted in the previous section is the backward Euler method:

y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).

This differs from the (standard, or forward) Euler method in that the function f is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula has y_{n+1} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.

More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method which is already mentioned in this article:

y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n)).

This leads to the family of Runge–Kutta methods. The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method:

y_{n+2} = y_{n+1} + (3/2) h f(t_{n+1}, y_{n+1}) − (1/2) h f(t_n, y_n).

This leads to the family of linear multistep methods.

See also
- Numerical methods for ordinary differential equations
- For numerical methods for calculating definite integrals, see Numerical integration
- Gradient descent similarly uses finite steps, here to find minima of functions
- Dynamic errors of numerical methods of ODE discretization

Notes
- Butcher 2003, p. 45; Hairer, Nørsett & Wanner 1993, p. 35
- Atkinson 1989, p. 342; Butcher 2003, p. 60
- Butcher 2003, p. 45; Hairer, Nørsett & Wanner 1993, p. 36
- Butcher 2003, p. 3; Hairer, Nørsett & Wanner 1993, p. 2
- See also Atkinson 1989, p. 344
- Hairer, Nørsett & Wanner 1993, p. 40
- Atkinson 1989, p. 342; Hairer, Nørsett & Wanner 1993, p. 36
- Atkinson 1989, p. 342
- Atkinson 1989, p. 343
- Butcher 2003, p. 60
- Atkinson 1989, p. 342
- Stoer & Bulirsch 2002, p. 474
- Atkinson 1989, p. 344
- Butcher 2003, p. 49
- Atkinson 1989, p. 346; Lakoba 2012, equation (1.16)
- Iserles 1996, p. 7
- Butcher 2003, p. 63
- Butcher 2003, p. 70; Iserles 1996, p. 57
- Butcher 2003, pp. 74–75
- Butcher 2003, pp. 75–78

References
- Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50023-0.
- Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8.
- Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-471-96758-3.
- Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving Ordinary Differential Equations I: Nonstiff Problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0.
- Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge: Cambridge University Press, ISBN 978-0-521-55655-2.
- Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3.
- Lakoba, Taras I. (2012), Simple Euler method and its modifications (lecture notes for MATH 334, University of Vermont), retrieved 29 February 2012.
External links
- The Wikibook Calculus has a page on the topic of: Euler's Method
- Euler's Method for O.D.E.'s, by John H. Matthews, California State University at Fullerton.
http://en.wikipedia.org/wiki/Euler_method
One common problem in data visualisation is the representation of two sets of data, which have a common subset of elements: a percentage of their contents which are present in both. The obvious solution to this issue is to draw a Venn diagram: two overlapping circles, where the overlap represents the percentage of common elements. The main issue in drawing a Venn diagram is, given the percentage of overlap, determining the placement of the circles such that it visually matches the stated overlap. Once the circle dimensions and placements have been worked out, the image manipulation is relatively straightforward. In this article, I'll be using PHP to demonstrate the implementation, and the imagick interface to ImageMagick in order to draw and output the image. Geometry: Overlapping circles Mathematically, two overlapping circles will cross each other at two points: a line between these two points is a chord of the circles, and the area contained within each chord segment by this line is 50% of the total overlap. In order to correctly place the circles, it's important to find out what x and h are in the above diagram; knowing these values will allow for easy calculation of the horizontal positions. By using the standard formulae for area and angle of a circle segment, the following equation can be obtained. By solving this equation, we can get the length of the sagitta, x. The problem presented by this equation, however, is that it cannot be solved analytically by working with the equation terms. A numerical approach will need to be used, to find a solution. The Newton-Raphson iterative method One of the most common numerical algorithms for solving an equation is the Newton-Raphson method, also known as Newton's method. It uses the gradient of the function at a particular point, to guess the next point. By picking a good starting point, it's possible to quickly narrow down a solution to the function (the point at which it crosses the x-axis). As can be seen in the above figure, the algorithm follows the gradient line down to the x-axis, and uses the crossing point there as its next guess for the solution. Taking another gradient from the function at that point, the algorithm homes in on the solution within (in the above case) 4 or 5 iterations. When used on a formula, the gradient is represented by the differential of the formula in question; for the sagitta length formula, the differential is: With both formulae to hand (the function itself and the differential), the iteration process is a simple calculation: This calculation can be repeated until the answer is close to the expected solution: in other words, when successive iterations don't result in a significant change to the answer. The definition of "significant change" depends on the problem: in this case, I'll be using "the same to four decimal places". As an example, suppose that the Venn diagram in Figure 1 is being generated: a diagram showing 20% overlap, where each circle has a radius of 150 pixels. Plugging these values into the Newton-Raphson solver shows the following values for each iteration. Iteration results for an example diagram As can be seen, the solver quickly converges on the answer for the length x. From here, h can be calculated as the difference between x and the radius, and the angle θ as: Implementing the solver In PHP, the solver can be implemented by defining the sagitta formula and its differential as two functions, and using a recursive function to run through their values. 
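The article's PHP listing is referenced below; as a language-neutral illustration, the following Python sketch performs the same Newton-Raphson iteration. It assumes that an overlap of p per cent means the lens-shaped intersection covers p per cent of one circle's area, so that each circular segment holds half of that area; the starting guess of half the radius and the function names are likewise assumptions made for this sketch rather than details taken from the article.

```python
# Newton-Raphson solve for the sagitta of the two circular segments whose union
# forms the overlap lens. Assumption: `overlap` is the fraction of one circle's
# area covered by the lens, so each segment must hold overlap * pi * r^2 / 2.
import math

def segment_area(x, r):
    """Area of a circular segment with sagitta x in a circle of radius r."""
    return r * r * math.acos((r - x) / r) - (r - x) * math.sqrt(2 * r * x - x * x)

def solve_sagitta(r, overlap, tol=1e-4, max_iter=50):
    """Find the sagitta x such that segment_area(x, r) equals half the lens area."""
    target = overlap * math.pi * r * r / 2
    x = r / 2                                   # starting guess (an assumption here;
                                                # the article starts its solver at 0)
    for _ in range(max_iter):                   # iteration cap as a safety valve
        fx = segment_area(x, r) - target
        dfx = 2 * math.sqrt(2 * r * x - x * x)  # dA/dx is simply the chord length
        x_next = x - fx / dfx
        if abs(x_next - x) < tol:               # "the same to four decimal places"
            return x_next
        x = x_next
    return x

r = 150
sagitta = solve_sagitta(r, 0.20)   # the 20%-overlap, 150-pixel example from the article
d = 2 * (r - sagitta)              # distance between the two circle centres
print(round(sagitta, 2), round(d, 2))  # roughly 46.9 and 206
```

With these example values the iteration settles in a handful of steps, matching the quick convergence described for the article's solver; the centre distance d is then what the drawing code needs for placing the circles and sizing the canvas.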
The following implementation contains a "safety valve" for the solver, for the general case where the equations may cause a divergence if the solver starts at x=0. In the case of this equation, the safety valve is unnecessary, since the algorithm will always converge if it starts at 0; it is included below for completeness.

Iterative solver implementation

Drawing the Venn diagram

Using PHP and imagick, the Venn diagram can be drawn quickly and efficiently based on the value for x obtained above. There are, however, a few issues that must be resolved:
- Circle placement: In order to plot a circle in imagick, its centre coordinate must be given, and this must be calculated horizontally for both circles. For the left-hand circle, this is simply one radius in from the left of the image. The right-hand circle would be three radii from the left edge if there were no overlap; from Figure 2, it can be seen that there is 2h of overlap, so this must be subtracted from the horizontal coordinate of the right-hand circle.
- Intersection: The two circles can be drawn easily, but the circle segments representing the intersection could cause difficulty. Fortunately, imagick provides a construct for drawing an ellipse segment: given two angles, it will plot the arc and chord between them, and fill the space with the "fill colour" defined beforehand.
- Image size: As with the circle placement, the horizontal dimension of the image is smaller than might be expected. Without overlap, the image would be as wide as both circle diameters put together; the overlap of 2h must again be subtracted from this if the image is not to be too wide.

Having taken these issues into account, the following code will generate the Venn diagram given x.

Image rendering implementation

The above code results in Figure 1.

Issues and enhancements

One problem that remains with this implementation is the range of overlap percentages. If an overlap of less than 0% is given (if, in other words, the sets don't overlap), the equations above result in complex roots, and PHP crashes while attempting to calculate them. Similarly, if the overlap is specified as more than 100%, this should reverse the positions of the sets in the Venn diagram; instead, the equations produce a small section of one circle which is rendered as all intersection. A simple range check on the overlap percentage can alleviate these issues and prevent out-of-range values from being passed through to the script.

Another limitation is that this script is inherently tied to two sets; it is not possible to specify an overlap between three sets using this model. The geometry to allow for three intersecting circles is left as an exercise for the reader.

Imran Nazar <[email protected]>, May 2010.

Article dated: 20th May 2010
http://imrannazar.com/Venn-Diagrams-in-PHP-and-imagick
From Wikipedia, the free encyclopedia A wetland is a land area that is saturated with water, either permanently or seasonally, such that it takes on the characteristics of a distinct ecosystem. Primarily, the factor that distinguishes wetlands from other land forms or water bodies is the characteristic vegetation that is adapted to its unique soil conditions: Wetlands consist primarily of hydric soil, which supports aquatic plants. Wetlands play a number of roles in the environment, principally water purification, flood control, and shoreline stability. Wetlands are also considered the most biologically diverse of all ecosystems, serving as home to a wide range of plant and animal life. Wetlands occur naturally on every continent except Antarctica. They can also be constructed artificially as a water management tool, which may play a role in the developing field of water-sensitive urban design. The largest wetlands in the world include the Amazon River basin and the West Siberian Plain. Another large wetland is the Pantanal, which straddles Brazil, Bolivia, and Paraguay in South America. The UN Millennium Ecosystem Assessment determined that environmental degradation is more prominent within wetland systems than any other ecosystem on Earth. International conservation efforts are being used in conjunction with the development of rapid assessment tools to inform people about wetland issues. A patch of land that develops pools of water after a rain storm would not be considered a "wetland," even though the land is wet. Wetlands have unique characteristics: they are generally distinguished from other water bodies or landforms based on their water level and on the types of plants that thrive within them. Specifically, wetlands are characterized as having a water table that stands at or near the land surface for a long enough period each year to support aquatic plants. Wetlands have also been described as ecotones, providing a transition between dry land and water bodies. Mitsch and Gosselink write that wetlands exist "...at the interface between truly terrestrial ecosystems and aquatic systems, making them inherently different from each other, yet highly dependent on both." In environmental decision-making, there are subsets of definitions that are agreed upon to make regulatory and policy decisions. A wetland is "an ecosystem that arises when inundation by water produces soils dominated by anaerobic processes, which, in turn, forces the biota, particularly rooted plants, to adapt to flooding." There are four main kinds of wetlands -- marsh, swamp, bog and fen (bogs and fens both being types of mires). Some experts also recognize wet meadows and aquatic ecosystems as additional wetland types. The largest wetlands in the world include the swamp forests of the Amazon and the peatlands of Siberia. Ramsar Convention definition - Article 1.1: "...wetlands are areas of marsh, fen, peatland or water, whether natural or artificial, permanent or temporary, with water that is static or flowing, fresh, brackish or salt, including areas of marine water the depth of which at low tide does not exceed six metres." - Article 2.1: "[Wetlands] may incorporate riparian and coastal zones adjacent to the wetlands, and islands or bodies of marine water deeper than six metres at low tide lying within the wetlands." Although the general definition given above applies around the world, each county and region tends to have its own definition for legal purposes. 
For example, in the United States, wetlands are defined as "those areas that are inundated or saturated by surface or groundwater at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. Wetlands generally include swamps, marshes, bogs and similar areas". This definition has been used in the enforcement of the Clean Water Act. Some US states, such as Massachusetts and New York, have separate definitions that may differ from the federal government. The most important factor producing wetlands is flooding. The duration of flooding determines whether the resulting wetland has aquatic, marsh or swamp vegetation. Other important factors include fertility, natural disturbance, competition, herbivory, burial and salinity. When peat accumulates, bogs and swamps arise. Wetlands vary widely due to local and regional differences in topography, hydrology, vegetation, and other factors, including human involvement. Wetlands can be divided into two main classes: tidal and non-tidal areas. Wetland hydrology is associated with the spatial and temporal dispersion, flow, and physiochemical attributes of surface and ground water in its reservoirs. Based on hydrology, wetlands can be categorized as riverine (associated with streams), lacustrine (associated with lakes and reservoirs), and palustrine (isolated). Sources of hydrological flows into wetlands are predominately precipitation, surface water, and ground water. Water flows out of wetlands by evapotranspiration, surface runoff, and sub-surface water outflow. Hydrodynamics (the movement of water through and from a wetland) affects hydroperiods (temporal fluctuations in water levels) by controlling the water balance and water storage within a wetland. Landscape characteristics control wetland hydrology and hydrochemistry. The O2 and CO2 concentrations of water depend on temperature and atmospheric pressure. Hydrochemistry within wetlands is determined by the pH, salinity, nutrients, conductivity, soil composition, hardness, and the sources of water. Water chemistry of wetlands varies across landscapes and climatic regions. Wetlands are generally minerotrophic with the exception of bogs. Bogs receive their water from the atmosphere and therefore their water has low mineral ionic composition because ground water has a higher concentration of dissolved nutrients and minerals in comparison to precipitation. The water chemistry of fens ranges from low pH and low minerals to alkaline with high accumulation of calcium and magnesium because they acquire their water from precipitation as well as ground water. Role of salinity Salinity has a strong influence on wetland water chemistry, particularly in wetlands along the coast. In non-riverine wetlands, natural salinity is regulated by interactions between ground and surface water, which may be influenced by human activity. Carbon is the major nutrient cycled within wetlands. Most nutrients, such as sulfur, phosphorus, carbon, and nitrogen are found within the soil of wetlands. Anaerobic and aerobic respiration in the soil influences the nutrient cycling of carbon, hydrogen, oxygen, and nitrogen, and the solubility of phosphorus thus contributing to the chemical variations in its water. Wetlands with low pH and saline conductivity may reflect the presence of acid sulfates and wetlands with average salinity levels can be heavily influenced by calcium or magnesium. 
Biogeochemical processes in wetlands are determined by soils with low redox potential. The biota of a wetland system includes its vegetation zones and structure as well as animal populations. The most important factor affecting the biota is the duration of flooding. Other important factors include fertility and salinity. In fens, species are highly dependent on water chemistry. The chemistry of water flowing into wetlands depends on the source of the water and the geological material through which it flows, as well as the nutrients discharged from organic matter in the soils and plants at higher elevations in slope wetlands. Biota may vary within a wetland due to season or recent flood regimes.

There are four main groups of hydrophytes that are found in wetland systems throughout the world.

Submerged water plants. This type of vegetation is found completely underwater. Submerged wetland vegetation can grow in saline and fresh-water conditions. Some species have underwater flowers, while others have long stems to allow the flowers to reach the surface. Submerged species provide a food source for native fauna, habitat for invertebrates, and also possess filtration capabilities. Examples include seagrasses and eelgrass.

Floating water plants. Floating vegetation is usually small, although it may take up a large surface area in a wetland system. These hydrophytes have small roots and are only found in slow-moving, nutrient-rich water. Floating aquatic plants are a food resource for avian species. Examples include water lilies and duckweed.

Emergent water plants. Emergent water plants are visible above the surface of the water, but their roots are completely submerged. Many have aerenchyma to transmit oxygen from the atmosphere to their roots. Extensive areas of emergent plants are usually termed marsh. Examples include cattails (Typha) and arrow arum (Peltandra virginica).

Surrounding trees and shrubs. Forested wetlands are generally known as swamps. The upper level of these swamps is determined by high water levels, which are negatively affected by dams. Some swamps can be dominated by a single species, such as silver maple swamps around the Great Lakes. Others, like those of the Amazon Basin, have large numbers of different tree species. Examples include cypress (Taxodium) and mangrove.

Fish: Fish are more dependent on wetland ecosystems than any other type of habitat. 75% of the United States’ commercial fish and shellfish stocks depend solely on estuaries to survive. Tropical fish species need mangroves for critical hatchery and nursery grounds and the coral reef system for food.

Amphibians: Frogs are the most crucial amphibian species in wetland systems. Frogs need both terrestrial and aquatic habitats in which to reproduce and feed. While tadpoles control algal populations, adult frogs forage on insects. Frogs are used as an indicator of ecosystem health because their thin skin absorbs both nutrients and toxins from the surrounding environment, resulting in an above-average extinction rate in unfavorable and polluted environmental conditions.

Reptiles: Alligators and crocodiles are two common reptilian species. Alligators are found in fresh water along with the freshwater species of the crocodile. The saltwater crocodile is found in estuaries and mangroves and can be seen along the coastline bordering the Great Barrier Reef in Australia. The Florida Everglades is the only place in the world where both crocodiles and alligators co-exist.
Snakes, lizards, goannas, and turtles also can be seen throughout wetlands. Snapping turtles are one of the many kinds of turtles found in wetlands. Mammals: Multiple small mammals as well as large herbivore and apex species such as the Florida Panther live within and around wetlands. The wetland ecosystem attracts mammals due to its prominent seed sources, invertebrate populations, and numbers of small reptiles and amphibians. Monotremes: The platypus (Ornithorhynchus anatinus) is found in eastern Australia living in freshwater rivers or lakes, and much like the beaver creates dams, create burrows for shelter and protection. The platypus swims through the use of webbed feet. Platypuses feed on insect larvae, worms, or other freshwater insects hunting mainly by night by the use of their bill. They turn up mud on the bottom of the lake or river, and with the help of the electroreceptors located on the bill, unearth insects and freshwater insects. The platypus stores their findings in special pouches behind their bill and consumes its prey upon returning to the surface. Insects and invertebrates: These species total more than half of the 100,000 known animal species in wetlands. Insects and invertebrates can be submerged in the water or soil, on the surface, and in the atmosphere. Algae are diverse water plants that can vary in size, color, and shape. Algae occur naturally in habitats such as inland lakes, inter-tidal zones, and damp soil and provide a dedicated food source for animals, fish, and invertebrates. There are three main groups of algae: Plankton are algae which are microscopic, free-floating algae. This algae is so tiny that on average, if fifty of these microscopic algae were lined up end-to-end, it would only measure one millimetre. Plankton are the basis of the food web and are responsible for primary production in the ocean using photosynthesis to make food. Filamentous algae are long strands of algae cells that form floating mats. Chara and Nitella algae are upright algae that look like a submerged plant with roots. Temperatures vary greatly depending on the location of the wetland. Many of the world's wetlands are in temperate zones (midway between the North or South Pole and the equator). In these zones, summers are warm and winters are cold, but temperatures are not extreme. However, wetlands found in the tropic zone, around the equator, are warm year round. Wetlands on the Arabian Peninsula, for example, can reach 50 °C (122 °F) and would therefore be subject to rapid evaporation. In northeastern Siberia, which has a polar climate, wetland temperatures can be as low as −50 °C (−58 °F). In a moderate zone, such as the Gulf of Mexico, a typical temperature might be 11 °C (51 °F). Wetlands are also located in every climatic zone. The amount of rainfall a wetland receives varies widely according to its area. Wetlands in Wales, Scotland, and Western Ireland typically receive about 1500 mm (or 60 in) per year. In some places in Southeast Asia, where heavy rains occur, they can receive up to 10,000 mm (about 200 in). In the northern areas of North America, wetlands exist where as little as 180 mm (7 inches) of rain fall each year. 
- Perennial systems - Seasonal systems - Episodic (periodic or intermittent) system of the down - Surface flow may occur in some segments, with subsurface flow in other segments - Ephemeral (short-lived) systems - Migratory species - Unsustainable water use - Ecosystem Stress Peatswamps of South-east Asia In Southeast Asia, peatswamp forests and soils are being drained, burnt, mined, and overgrazed contributing severely to climate change. As a result of peat drainage, the organic carbon that was built up over thousands of years and is normally under water, is suddenly exposed to the air. It decomposes and turns into carbon dioxide (CO2), which is released into the atmosphere. Peat fires cause the same process and in addition create enormous clouds of smoke that cross international borders, such as happen every year in Southeast Asia. Peatlands form only 3% of all the world’s land area, however, their degradation equals 7% of all fossil fuel CO2 emissions. Through the building of dams, Wetlands International is halting the drainage of peatlands in Southeast Asia, hoping to mitigate CO2 emissions. Concurrent wetland restoration techniques include reforestation with native tree species as well as setting up community fire brigades. This sustainable approach can be seen in Central Kalimantan and Sumatra, Indonesia. Concerns are developing over certain aspects of farm fishing, which uses natural waterways to harvest fish for human consumption and pharmaceuticals. This practice has become especially popular within Asia and the South Pacific. Its impact downstream upon much larger water ways has negatively influenced many small island developing states. ||This section needs more links to other articles to help integrate it into the encyclopedia. (December 2012)| The function of natural wetlands can be classified by their ecosystem benefits. United Nations Millennium Ecosystem Assessment and Ramsar Convention found wetlands to be of biosphere significance and societal importance in the following areas: - Flood control - Groundwater replenishment - Shoreline stabilisation and storm protection - Water purification - Reservoirs of biodiversity - Wetland products - Cultural values - Recreation and tourism - Climate change mitigation and adaptation The economic worth of the ecosystem services provided to society by intact, naturally functioning wetlands is frequently much greater than the perceived benefits of converting them to ‘more valuable’ intensive land use – particularly as the profits from unsustainable use often go to relatively few individuals or corporations, rather than being shared by society as a whole.-Ramsar convention Unless otherwise cited, Ecosystem services is based on the following series of references. Major wetland type: floodplain Storage Reservoirs and Flood Protection. The wetland system of floodplains is formed from major rivers downstream from their headwaters. Notable river systems that produce large spans of floodplain include the Nile River (Africa), Mississippi River (USA), Amazon River (South America), Yangtze River (China), Danube River (Central Europe) and Murray-Darling River (Australia). "The floodplains of major rivers act as natural storage reservoirs, enabling excess water to spread out over a wide area, which reduces its depth and speed. Wetlands close to the headwaters of streams and rivers can slow down rainwater runoff and spring snowmelt so that it doesn’t run straight off the land into water courses. 
This can help prevent sudden, damaging floods downstream.”

Human impact. Converting wetlands through drainage and development has contributed to the problem of irregular flood control, because the loss of wetland area forces water into channels confined to narrower corridors. These new channels must manage the same amount of precipitation, causing flood peaks to be higher and floodwaters to travel faster. Water management engineering developments in the past century have degraded these wetlands through the construction of artificial embankments. These constructions may be classified as dykes, bunds, levees, weirs, barrages and dams, but they serve the single purpose of concentrating water into a select source or area. Wetland water sources that were once spread slowly over a large, shallow area are pooled into deep, concentrated locations. Loss of wetland floodplains results in more severe and damaging flooding. Catastrophic human impact in the Mississippi River floodplains was seen in the death of several hundred individuals during a levee breach in New Orleans caused by Hurricane Katrina. Catastrophic ecological events from human-made embankments have been observed along the Yangtze River floodplains, where the middle course of the river has become prone to more frequent and damaging flooding, including the loss of riparian vegetation, a 30% loss of the vegetation cover throughout the river’s basin, a doubling of the percentage of the land affected by soil erosion, and a reduction in reservoir capacity through siltation build-up in floodplain lakes.

Groundwater replenishment

Major wetland type: marsh, swamp, and subterranean karst and cave hydrological systems

Surface water, the water visibly seen in wetland systems, represents only a portion of the overall water cycle, which also includes atmospheric water and groundwater. Wetland systems are directly linked to groundwater and are a crucial regulator of both the quantity and quality of water found below the ground. Wetland systems built on permeable sediments such as limestone, or occurring in areas with highly variable and fluctuating water tables, play an especially important role in groundwater replenishment or water recharge. Porous sediments allow water to filter down through the soil and overlying rock into aquifers, which are the source of 95% of the world’s drinking water. Wetlands can also act as recharge areas when the surrounding water table is low and as a discharge zone when it is too high. Karst (cave) systems are a unique example of this, consisting of networks of underground rivers influenced by rain and other forms of precipitation. These wetland systems are capable of regulating changes in the water table of upwards of 130 metres (430 ft).

Human impact. Groundwater is an important source of water for drinking and for the irrigation of crops. Over 1 billion people in Asia and 65% of the public water sources in Europe source 100% of their water from groundwater. Irrigation is a massive use of groundwater, with 80% of the world’s groundwater used for agricultural production. Unsustainable abstraction of groundwater has become a major concern. In the Commonwealth of Australia, water licensing is being implemented to control use of the water in major agricultural regions. On a global scale, groundwater deficits and water scarcity are among the most pressing concerns facing the 21st century.
Shoreline stabilisation and storm protection Wetland type: Mangroves, Coral Reefs, Saltmarsh Tidal and inter-tidal wetland systems protect and stabilize coastal zones. Coral reefs provide a protective barrier to coastal shoreline. Mangroves stabilize the coastal zone from the interior and will migrate with the shoreline to remain adjacent to the boundary of the water. The main conservation benefit these systems have against storms and tidal waves is the ability to reduce the speed and height of waves and floodwaters. Human-Impact. The sheer number of people who live and work near the coast is expected to grow immensely over the next 50 years. From an estimated 200 million people that currently live in low-lying coastal regions, the development of urban coastal centers is projected to increase the population by 5 fold within 50 years. The United Kingdom has begun the concept of managed coastal realignment. This management technique provides shoreline protection through restoration of natural wetlands rather than through applied engineering. Wetland Type: Floodplain, Mudflat, Saltmarsh, Mangroves Nutrient Retention. Wetlands cycle both sediments and nutrients balancing terrestrial and aquatic ecosystems. A natural function of wetland vegetation is the up-take and storage of nutrients found in the surrounding soil and water. These nutrients are retained in the system until the plant dies or is harvested by animals or humans. Wetland vegetation productivity is linked to the climate, wetland type, and nutrient availability. The grasses of fertile floodplains such as the Nile produce the highest yield including plants such as Arundo donax(giant reed), Cyperus papyrus (papyrus), Phragmites (reed) and Typha (cattail, bulrush). Sediment Traps. Rainfall run-off is responsible for moving sediment through waterways. These sediments move towards larger and more sizable waterways through a natural process that moves water towards oceans. All types of sediments which may be composed of clay, sand, silt, and rock can be carried into wetland systems through this process. Reedbeds or forests located in wetlands act as physical barriers to slow waterflow and trap sediment. Water purification. Many wetland systems possess biofilters, hydrophytes, and organisms that in addition to nutrient up-take abilities have the capacity to remove toxic substances that have come from pesticides, industrial discharges, and mining activities. The up-take occurs through most parts of the plant including the stems, roots, and leaves . Floating plants can absorb and filter heavy metals. Eichhornia crassipes (water hyacinth), Lemna (duckweed) and Azolla (water fern) store iron and copper commonly found in wastewater. Many fast-growing plants rooted in the soils of wetlands such as Typha (cattail) and Phragmites (reed) also aid in the role of heavy metal up-take. Animals such as the oyster can filter more than 200 liters (53 gallons) of water per day while grazing for food, removing nutrients, suspended sediments, and chemical contaminants in the process. Capacity. The ability of wetland systems to store nutrients and trap sediment is highly efficient and effective but each system has a threshold. An overabundance of nutrient input from fertilizer run-off, sewage effluent, or non-point pollution will cause eutrophication. Upstream erosion from deforestation can overwhelm wetlands making them shrink in size and see dramatic biodiversity loss through excessive sedimentation load. 
The capacity of wetland vegetation to store heavy metals is affected by waterflow, number of hectares (acres), climate, and type of plant. Human-Impact. Introduced hydrophytes in different wetland systems can have devastating results. The introduction of water hyacinth, a native plant of South America into Lake Victoria in East Africa as well as duckweed into non-native areas of Queensland, Australia, have overtaken entire wetland systems suffocating the ecosystem due to their phenomenal growth rate and ability to float and grow on the surface of the water. ||This section needs more links to other articles to help integrate it into the encyclopedia. (December 2012)| The function of most natural wetland systems is not to manage to wastewater, however, their high potential for the filtering and the treatment of pollutants has been recognized by environmental engineers that specialize in the area of wastewater treatment. These constructed artificial wetland systems are highly controlled environments that intend to mimic the occurrences of soil, flora, and microorganisms in natural wetlands to aid in treating wastewater effluent. Artificial wetlands provide the ability to experiment with flow regimes, micro-biotic composition, and flora in order to produce the most efficient treatment process. Other advantages are the control of retention times and hydraulic channels. The most important factors of constructed wetlands are the water flow processes combined with plant growth. Constructed wetland systems can be surface flow systems with only free-floating macrophytes, floating-leaved macrophytes, or submerged macrophytes; however, typical free water surface systems are usually constructed with emergent macrophytes. Constructed wetlands can be adapted to treat raw sewage, secondary domestic sludge, enhance water quality of oxidation ponds’ discharge, storm waters, mining waste, and industrial and agricultural waste effluents. The Urrbrae Wetland in Australia was constructed for urban flood control and environmental education. International wastewater management programs can be seen from Kolkata (Calcutta), India to Arcata, California, USA. Kolkata’s constructed wetland. Kolkata is an example of how constructed wetlands are being utilized in developing countries. Using the purification capacity of wetlands, the Indian city of Kolkata (Calcutta) has pioneered a system of sewage disposal that is both efficient and environmentally friendly. Built to house one million people, Kolkata is now home to over 10 million, many living in slums. But the 8,000-hectare East Kolkata Wetlands Ramsar Site, a patchwork of tree-fringed canals, vegetable plots, rice paddies and fish ponds – and the 20,000 people that work in them – daily transform one-third of the city’s sewage and most of its domestic refuse into a rich harvest of fish and fresh vegetables. For example, the Mudially Fishermen’s Cooperative Society is a collective of 300 families that lease 70 hectares into which wastewater from the city is released. Through a series of natural treatment processes – including the use of Eichhornia crassipes and other plants for absorbing oil, grease and heavy metals – the Cooperative has turned the area into a thriving fish farm and nature park. 
In 2005/06, the Cooperative sold fish worth over US$135,000 and shared income of more than US$55,000 among its members.

Reservoirs of biodiversity

Wetland systems' rich biodiversity is becoming a focal point at international treaty conventions and within the World Wildlife Fund organization due to the high number of species present in wetlands, the small global geographic area of wetlands, the number of species which are endemic to wetlands, and the high productivity of wetland systems. Hundreds of thousands of animal species, 20,000 of them vertebrates, live in wetland systems. The discovery rate of freshwater fish is around 200 new species per year.

Biodiverse river basins. The Amazon holds 3,000 species of freshwater fish within the boundaries of its basin, many of which disperse the seeds of trees. One of its key species, the Piramutaba catfish, Brachyplatystoma vaillantii, migrates more than 3,300 km (2,051 miles) from its nursery grounds near the mouth of the Amazon River to its spawning grounds in Andean tributaries (400 m or 437 yards above sea level), distributing plant seeds along the route.

Productive intertidal zones. Intertidal mudflats have a similar productivity even while possessing a low number of species. The abundance of invertebrates found within the mud is a food source for migratory waterfowl.

Critical life-stage habitat. Mudflats, saltmarshes, mangroves, and seagrass beds possess both species richness and productivity, and are home to important nursery areas for many commercial fish stocks.

Genetic diversity. Many species in wetland systems are unique due to the long period of time that the ecosystem has been physically isolated from other aquatic sources. The number of endemic species in Lake Baikal in Russia classifies it as a hotspot for biodiversity and one of the most biodiverse wetlands in the entire world.

Lake Baikal. Evidence from a research study by Mazepova et al. suggests that the number of crustacean species endemic to Lake Baikal (>690 species and subspecies) exceeds the number of the same groups of animals inhabiting all the fresh water bodies of Eurasia together. Its 150 species of free-living Platyhelminthes alone is comparable to the entire number in all of Eastern Siberia. The 34 species and subspecies of Baikal sculpins are more than twice the number of the analogous fauna inhabiting Eurasia. One of the most exciting discoveries was made by A.V. Shoshin, who registered about 300 species of free-living nematodes using only 6 near-shore sampling localities in the Southern Baikal. "If we will take into consideration, that about 60 % of the animals can be found nowhere else except Baikal, it may be assumed that the lake may be the biodiversity center of the Eurasian continent."

Human impact. Biodiversity loss occurs in wetland systems through land use changes, habitat destruction, pollution, exploitation of resources, and invasive species. Vulnerable, threatened, and endangered species account for 17% of waterfowl, 38% of freshwater-dependent mammals, 33% of freshwater fish, 26% of freshwater amphibians, 72% of freshwater turtles, 86% of marine turtles, 43% of crocodilians and 27% of coral reef-building species. The impact of maintaining biodiversity is seen at the local level through job creation, sustainability, and community productivity. A good example is the Lower Mekong basin, which runs through Cambodia, Laos, and Vietnam.
The basin supports over 55 million people, and the sustainability of the region is enhanced through wildlife tours. The US state of Florida has estimated that US$1.6 billion was generated in state revenue from recreational activities associated with wildlife. Native wetland plants sustainably harvested for medicinal remedies in the Caribbean and Australia include the red mangrove, Rhizophora mangle, which possesses antibacterial, wound-healing, anti-ulcer, and antioxidant properties.

Wetland systems naturally produce an array of vegetation and other ecological products that can be harvested for personal and commercial use. The most significant of these is fish, which have all or part of their life-cycle occur within a wetland system. Fresh and saltwater fish are the main source of protein for one billion people and comprise 15% of an additional two billion people’s diets. In addition, fish support a fishing industry that provides 80% of the income and employment to residents in developing countries. Another food staple found in wetland systems is rice, a popular grain that accounts for about one fifth of the total global calorie count. In Bangladesh, Cambodia and Vietnam, where rice paddies are predominant on the landscape, rice consumption reaches 70%. Foods converted to sweeteners and carbohydrates include the sago palm of Asia and Africa (cooking oil), the nipa palm of Asia (sugar, vinegar, alcohol, and fodder) and honey collected from mangroves. More than supplemental dietary intake, this produce sustains entire villages. Coastal Thai villages earn the key portion of their income from sugar production, while Cuba relocates more than 30,000 hives each year to track the seasonal flowering of the mangrove Avicennia.

Other mangrove-derived products include:
- fuelwood
- salt (produced by evaporating seawater)
- animal fodder
- traditional medicines (e.g. from mangrove bark)
- fibers for textiles
- dyes and tannins

Human impact. Over-fishing is the major problem for sustainable use of wetlands. Aquaculture within the fisheries industries is eliminating large areas of wetland systems through practices such as the shrimp farming industry's destruction of mangroves. Aquaculture continues to develop rapidly throughout the Asia-Pacific region, especially in China; holdings in Asia account for 90% of the world's aquaculture farms and 80% of their global value. Threats to rice fields mainly stem from inappropriate water management, introduction of invasive alien species, agricultural fertilizers, pesticides, and land use changes. Industrial-scale production of palm oil threatens the biodiversity of wetland ecosystems in parts of south-east Asia, Africa, and other developing countries. Exploitation can also occur at the community level, as is sometimes seen throughout coastal villages of Southern Thailand, where each resident may take for themselves every consumable of the mangrove forest (fuelwood, timber, honey, resins, crab, and shellfish); these resources then become threatened by increasing population and continual harvest. Other issues that occur on a global level include an uneven contribution to climate change, point and non-point pollution, and air and water quality issues due to destructive wetland practices.

Wetlands and climate change

All references within this section were obtained from the following source.
“Low water and occasional drying of the wetland bottom during droughts (dry marsh phase) stimulate plant recruitment from a diverse seed bank and increase productivity by mobilizing nutrients. In contrast, high water during deluges (lake marsh phase) causes turnover in plant populations and creates greater interspersion of element cover and open water, but lowers overall productivity. During a cover cycle that ranges from open water to complete vegetation cover, annual net primary productivity may vary 20-fold.” Mitigation and adaption Wetlands perform two important functions in relation to climate change. They have mitigation effects through their ability to sink carbon, and adaptation effects through their ability to store and regulate water. Wetlands have historically been the victim of large draining efforts for real estate development, or flooding for use as recreational lakes. Since the 1970s, more focus has been put on preserving wetlands for their natural function yet by 1993 half the world's wetlands had been drained. Wetlands provide a valuable flood control function. Wetlands are very effective at filtering and cleaning water pollution, (often from agricultural runoff from the farms that replaced the wetlands in the first place). To replace these wetland ecosystem services enormous amounts of money had to be spent on water purification plants, along with the remediation measures for controlling floods: dam and levee construction. In order to produce sustainable wetlands, short-term, private-sector profits need to come secondary to global equity. Decision-makers must valuate wetland type, provided ecosystem service, long-term benefit, and current subsidies inflating valuation on either the private or public sector side. Analysis using the impact of hurricanes versus storm protection features projected wetland valuation at US$33,000/hectare/year. Balancing wetland conservation with the needs of people Wetlands are vital ecosystems that provide livelihoods for the millions of people who live in and around them. The Millennium Development Goals (MDGs) called for different sectors to join forces to secure wetland environments in the context of sustainable development and improving human wellbeing. A three-year project carried out by Wetlands International in partnership with the International Water Management Institute found that it is possible to conserve wetlands while improving the livelihoods of people living among them. Case studies conducted in Malawi and Zambia looked at how dambos – wet, grassy valleys or depressions where water seeps to the surface – can be farmed sustainably to improve livelihoods. Mismanaged or overused dambos often become degraded, however, using a knowledge exchange between local farmers and environmental managers, a protocol was developed using soil and water management practices. Project outcomes included a high yield of crops, development of sustainable farming techniques, and adequate water management generating enough water for use as irrigation. Before the project, there were cases where people had died from starvation due to food shortages. By the end of it, many more people had access to enough water to grow vegetables. A key achievement was that villagers had secure food supplies during long, dry months. They also benefited in other ways: nutrition was improved by growing a wider range of crops, and villagers could also invest in health and education by selling produce and saving money. 
The Convention on Wetlands of International Importance, especially as Waterfowl Habitat, or Ramsar Convention, is an international treaty designed to address global concerns regarding wetland loss and degradation. The primary purposes of the treaty are to list wetlands of international importance and to promote their wise use, with the ultimate goal of preserving the world's wetlands. Methods include restricting access to the majority portion of wetland areas, as well as educating the public to combat the misconception that wetlands are wastelands. The Convention works closely with five International Organisation Partners. These are: Birdlife International, IUCN, International Water Management Institute, Wetlands International and World Wide Fund for Nature. The partners provide technical expertise, help conduct or facilitate field studies and provide financial support. The IOPs also participate regularly as observers in all meetings of the Conference of the Parties and the Standing Committee and as full members of the Scientific and Technical Review Panel. The value of a wetland system to the earth and to humankind is one of the most important valuations that can be computed for sustainable development. A guideline involving assessing a wetland, keeping inventories of known wetlands, and monitoring the same wetlands over time is the current process that is used to educate environmental decision-makers such as governments on the importance of wetland protection and conservation. - Constructed Wetlands take 10–100 years to fully resemble the vegetative composition of a natural wetland. - Artificial wetlands do not have hydric soil. The soil has very low levels of organic carbon and total nitrogen compared to natural wetland systems. - Organic matter can be added to degraded natural wetlands to help restore their productivity before the wetland is destroyed. Five steps to assessing a wetland - Collect general biodiversity data in order to inventory and prioritize wetland species, communities and ecosystems. Obtain baseline biodiversity information for a given area. - Gather information on the status of a focus or target species such as threatened species. Collect data pertaining to the conservation of a specific species. - Gain information on the effects of human or natural disturbance (changes) on a given area or species. - Gather information that is indicative of the general ecosystem health or condition of a specific wetland ecosystem. - Determine the potential for sustainable use of biological resources in a particular wetland ecosystem. Developing a global inventory of wetlands has proven to be a large and difficult undertaking. Current efforts are based on available data, but both classification and spatial resolution have proven to be inadequate for regional or site-specific environmental management decision-making. It is difficult to identify small, long, and narrow wetlands within the landscape. Many of today’s remote sensing satellites do not have sufficient spatial and spectral resolution to monitor wetland conditions, although multispectral IKONOS and QuickBird data may offer improved spatial resolutions once it is 4 m or higher. Majority of the pixels are just mixtures of several plant species or vegetation types and are difficult to isolate which translates into an inability to classify the vegetation that defines the wetland. Improved remote sensing information, coupled with good knowledge domain on wetlands will facilitate expanded efforts in wetland monitoring and mapping. 
This will also be extremely important because we expect to see major shifts in species composition due to both anthropogenic land use and natural changes in the environment caused by climate change. A wetland system needs to be monitored over time in order to assess whether it is functioning at an ecologically sustainable level or whether it is becoming degraded. Degraded wetlands suffer a loss in water quality, a high number of threatened and endangered species, and poor soil conditions. Due to the large size of wetlands, mapping is an effective tool for monitoring them. There are many remote sensing methods that can be used to map wetlands. Remote-sensing technology permits the acquisition of timely digital data on a repetitive basis. This repeat coverage allows wetlands, as well as the adjacent land-cover and land-use types, to be monitored seasonally and/or annually. Using digital data provides a standardized data-collection procedure and an opportunity for data integration within a geographic information system. Traditionally, Landsat 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and the SPOT 4 and 5 satellite systems have been used for this purpose. More recently, however, multispectral IKONOS and QuickBird data, with spatial resolutions of 4 m by 4 m and 2.44 m by 2.44 m, respectively, have been shown to be excellent sources of data when mapping and monitoring smaller wetland habitats and vegetation communities. For example, the Detroit Lakes Wetland Management District assessed area wetlands in Michigan, USA using remote sensing. Using this technology, satellite images were taken over a large geographic area and an extended period. In addition, this technique was less costly and less time-consuming than the older method of visual interpretation of aerial photographs. By comparison, most aerial photographs also require experienced interpreters to extract information based on structure and texture, while the interpretation of remote sensing data requires analysis of only one characteristic (spectral). However, there are a number of limitations associated with this type of image acquisition. Analysis of wetlands has proved difficult because the data are often collected for other purposes, such as the analysis of land cover or land use. Practically, many natural wetlands are difficult to monitor from the ground, as these areas are quite often difficult to access and may involve exposure to native wildlife and potential endemic disease. Methods to develop a classification system for specific biota of interest could assist with technological advances that will allow for identification at a very high accuracy rate. The cost of and expertise involved in remote sensing technology is still a factor hindering further advancements in image acquisition and data processing. Future improvements in current wetland vegetation mapping could include the use of more recent and better geospatial data when it is available.
List of wetland types - A—Marine and Coastal Zone wetlands - Marine waters—permanent shallow waters less than six metres deep at low tide; includes sea bays, straits - Subtidal aquatic beds; includes kelp beds, seagrasses, tropical marine meadows - Coral reefs - Rocky marine shores; includes rocky offshore islands, sea cliffs - Sand, shingle or pebble beaches; includes sand bars, spits, sandy islets - Intertidal mud, sand or salt flats - Intertidal marshes; includes saltmarshes, salt meadows, saltings, raised salt marshes, tidal brackish and freshwater marshes - Intertidal forested wetlands; includes mangrove swamps, nipa swamps, tidal freshwater swamp forests - Brackish to saline lagoons and marshes with one or more relatively narrow connections with the sea - Freshwater lagoons and marshes in the coastal zone - Non-tidal freshwater forested wetlands - B—Inland wetlands - Permanent rivers and streams; includes waterfalls - Seasonal and irregular rivers and streams - Inland deltas (permanent) - Riverine floodplains; includes river flats, flooded river basins, seasonally flooded grassland, savanna and palm savanna - Permanent freshwater lakes (> 8 ha); includes large oxbow lakes - Seasonal/intermittent freshwater lakes (> 8 ha), floodplain lakes - Permanent saline/brackish lakes - Seasonal/intermittent saline lakes - Permanent freshwater ponds (< 8 ha), marshes and swamps on inorganic soils; with emergent vegetation waterlogged for at least most of the growing season - Seasonal/intermittent freshwater ponds and marshes on inorganic soils; includes sloughs, potholes; seasonally flooded meadows, sedge marshes - Permanent saline/brackish marshes - Seasonal saline marshes - Shrub swamps; shrub-dominated freshwater marsh, shrub carr, alder thicket on inorganic soils - Freshwater swamp forest; seasonally flooded forest, wooded swamps; on inorganic soils - Peatlands; forest, shrub or open bogs - Alpine and tundra wetlands; includes alpine meadows, tundra pools, temporary waters from snow melt - Freshwater springs, oases and rock pools - Geothermal wetlands - Inland, subterranean karst wetlands - C—Human-made wetlands - Water storage areas; reservoirs, barrages, hydro-electric dams, impoundments (generally > 8 ha) - Ponds, including farm ponds, stock ponds, small tanks (generally < 8 ha) - Aquaculture ponds; fish ponds, shrimp ponds - Salt exploitation; salt pans, salines - Excavations; gravel pits, borrow pits, mining pools - Wastewater treatment; sewage farms, settling ponds, oxidation basins - Irrigated land and irrigation channels; rice fields, canals, ditches - Seasonally flooded arable land, farm land Variations of names for wetland systems: - "History of the Everglades | Everglades Forever | Florida DEP". Dep.state.fl.us. 2009-02-11. Retrieved 2012-05-23. - "Department of Environmental Protection State of Florida Glossary". State of Florida. Retrieved 2011-09-25. - Butler S., ed. (2010). Macquarie Concise Dictionary (5th ed.). Sydney Australia: Macquarie Dictionary Publishers Ltd. ISBN 978-1-876429-85-0. - "Official page of the Ramsar Convention". Retrieved 2011-09-25. - Keddy, Paul A. (2010). Wetland ecology : principles and conservation (2nd ed.). New York: Cambridge University Press. p. 497. ISBN 978-0521519403. - "Ramsar Convention Ecosystem Services Benefit Factsheets". Retrieved 2011-09-25. - "US EPA". Retrieved 2011-09-25. - Fraser, L; Keddy, PA, ed. (2005). The world's largest wetlands : their ecology and conservation. Cambridge, UK [u.a.]: Cambridge Univ. Press. p. 488. 
http://wpedia.goo.ne.jp/enwiki/Wetland
In mathematics, a hyperbolic angle is a geometric figure that divides a hyperbola. The relation of a hyperbolic angle to its hyperbola parallels the relation of an ordinary angle to a circle. The hyperbolic angle is first defined for a "standard position", and subsequently as a measure of an interval on a branch of a hyperbola. A hyperbolic angle in standard position is the angle at (0, 0) between the ray to (1, 1) and the ray to (x, 1/x), where x > 1. Note that unlike the circular angle, the hyperbolic angle is unbounded, as is the function ln x, a fact related to the unbounded nature of the harmonic series. The hyperbolic angle in standard position is considered to be negative when 0 < x < 1. Suppose ab = 1 and cd = 1 with c > a > 1, so that (a, b) and (c, d) determine an interval on the hyperbola xy = 1. Then the squeeze mapping with diagonal elements b and a maps this interval to the standard-position hyperbolic angle that runs from (1, 1) to (bc, ad). By the result of Gregoire de Saint-Vincent, the hyperbolic sector determined by (a, b) and (c, d) has the same area as this standard-position angle, and the magnitude of the hyperbolic angle is taken to be this area. The hyperbolic functions sinh, cosh, and tanh use the hyperbolic angle as their independent variable because their values may be premised on analogies to the circular trigonometric functions when the hyperbolic angle defines a hyperbolic triangle. Thus this parameter becomes one of the most useful in the calculus of a real variable.
Comparison with circular angle
In terms of area, one can consider a circle of radius √2, for which the area of a circular sector of u radians is u. (The area of the whole circle is 2π.) As the hyperbola xy = 1, associated with the hyperbolic angle, has its shortest diameter between (−1, −1) and (1, 1), it too has semidiameter √2. As shown in the diagram, a ray of slope less than one determines an angle u that can be read either as a circular angle, whose magnitude equals the area of the corresponding circular sector, or as a hyperbolic angle. The circular and hyperbolic trigonometric function magnitudes are all √2 times the legs of right triangles determined by the ray, the circle, and the hyperbola. There is also a projective resolution between the circular and hyperbolic cases: both curves are conic sections, and hence are treated as projective ranges in projective geometry. Given an origin point on one of these ranges, other points correspond to angles. The idea of addition of angles, basic to science, corresponds to addition of points on one of these ranges as follows: circular angles can be characterised geometrically by the property that if two chords P0P1 and P0P2 subtend angles L1 and L2 at the centre of a circle, their sum L1 + L2 is the angle subtended by a chord P0Q, where P0Q is required to be parallel to P1P2. The same construction can also be applied to the hyperbola. If P0 is taken to be the point (1, 1), P1 the point (x1, 1/x1), and P2 the point (x2, 1/x2), then the parallel condition requires that Q be the point (x1x2, 1/(x1x2)). It thus makes sense to define the hyperbolic angle from P0 to an arbitrary point on the curve as a logarithmic function of the point's value of x. Whereas in Euclidean geometry moving steadily in a direction orthogonal to a ray from the origin traces out a circle, in a pseudo-Euclidean plane moving steadily orthogonal to a ray from the origin traces out a hyperbola. In Euclidean space, the multiple of a given angle traces equal distances around a circle, while it traces exponential distances upon the hyperbolic line.
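Since the hyperbolic angle from (1, 1) to (x, 1/x) is defined as an area, it can be checked numerically. The short Python sketch below is not part of the original article (the function name is ours); it approximates that area as the area under y = 1/t from 1 to x, which the following paragraphs on quadrature identify with ln x, and it verifies the addition rule suggested by the chord construction above: the angle to x1·x2 is the sum of the angles to x1 and to x2.

```python
import math

def hyperbolic_angle(x, steps=100_000):
    """Approximate the area under y = 1/t for t from 1 to x (midpoint rule).
    By Saint-Vincent's result this equals the hyperbolic-sector area, i.e. ln x."""
    h = (x - 1.0) / steps
    return sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(steps)) * h

x1, x2 = 2.0, 3.0
print(hyperbolic_angle(x1), math.log(x1))           # both ~0.6931
print(hyperbolic_angle(x1 * x2),                    # additivity check:
      hyperbolic_angle(x1) + hyperbolic_angle(x2))  # both ~1.7918
```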
The quadrature of the hyperbola is the evaluation of the area swept out by a radial segment from the origin as the terminus moves along the hyperbola, which is just the topic of hyperbolic angle. The quadrature of the hyperbola was first accomplished by Gregoire de Saint-Vincent in 1647 in his momentous Opus geometricum quadrature circuli et sectionum coni. As expressed by a historian,
- [He made the] quadrature of a hyperbola to its asymptotes, and showed that as the area increased in arithmetic series the abscissas increased in geometric series.
The upshot was the logarithm function, now understood as the area under y = 1/x to the right of x = 1. As an example of a transcendental function, the logarithm is more familiar than its motivator, the hyperbolic angle. Nevertheless, the hyperbolic angle plays a role when the theorem of Saint-Vincent is advanced with squeeze mapping. Circular trigonometry was extended to the hyperbola by Augustus De Morgan in his textbook Trigonometry and Double Algebra. In 1878 W. K. Clifford used the hyperbolic angle to parametrize a unit hyperbola, describing it as "quasi-harmonic motion". In 1894 Alexander Macfarlane circulated his essay "The Imaginary of Algebra", which used hyperbolic angles to generate hyperbolic versors, in his book Papers on Space Analysis. When Ludwik Silberstein penned his popular 1914 textbook on the new theory of relativity, he used the rapidity concept based on the hyperbolic angle a, where tanh a = v/c, the ratio of velocity v to the speed of light. He wrote:
- It seems worth mentioning that to unit rapidity corresponds a huge velocity, amounting to 3/4 of the velocity of light; more accurately we have v = (.7616) c for a = 1.
- ... the rapidity a = 1, ... consequently will represent the velocity .76 c which is a little above the velocity of light in water.
Imaginary circular angle
The hyperbolic angle is often presented as if it were an imaginary number. In fact, if x is a real number and i^2 = −1, then cosh x = cos(ix) and sinh x = −i sin(ix), so that the hyperbolic functions cosh and sinh can be presented through the circular functions. But these identities do not arise from a circle or rotation; rather, they can be understood in terms of infinite series. In particular, the series expressing the exponential function, e^x = Σ x^n/n!, consists of even and odd terms: the even terms comprise the cosh function and the odd terms the sinh function. The infinite series for cosine is derived from cosh by turning it into an alternating series, and the series for sine comes from making sinh into an alternating series. The above identities use the number i to remove the alternating factor (−1)^n from the terms of the series and so restore the full halves of the exponential series. Nevertheless, in the theory of holomorphic functions, the hyperbolic sine and cosine functions are incorporated into the complex sine and cosine functions.
- Bjørn Felsager, Through the Looking Glass - A glimpse of Euclid's twin geometry, the Minkowski geometry, ICME-10 Copenhagen 2004; p. 14. See also example sheets exploring Minkowskian parallels of some standard Euclidean results
- Viktor Prasolov and Yuri Solovyev (1997) Elliptic Functions and Elliptic Integrals, page 1, Translations of Mathematical Monographs volume 170, American Mathematical Society
- Hyperbolic Geometry, pp. 5-6, Fig. 15.1
- David Eugene Smith (1925) History of Mathematics, pp. 424-5, v. 1
- Augustus De Morgan (1849) Trigonometry and Double Algebra, Chapter VI: "On the connection of common and hyperbolic trigonometry"
- Alexander Macfarlane (1894) Papers on Space Analysis, B.
Westerman, New York, weblink from archive.org - Ludwik Silberstein (1914) Theory of Relativity, Cambridge University Press, pp. 180–1 - Janet Heine Barnett (2004) "Enter, stage center: the early drama of the hyperbolic functions", available in (a) Mathematics Magazine 77(1):15–30 or (b) chapter 7 of Euler at 300, RE Bradley, LA D'Antonio, CE Sandifer editors, Mathematical Association of America ISBN 0-88385-565-8 . - Arthur Kennelly (1912) Application of hyperbolic functions to electrical engineering problems - William Mueller, Exploring Precalculus, § The Number e, Hyperbolic Trigonometry. - John Stillwell (1998) Numbers and Geometry exercise 9.5.3, p. 298, Springer-Verlag ISBN 0-387-98289-2.
http://en.wikipedia.org/wiki/Hyperbolic_angle
Have your parents ever found you munching on candy and asked you, "How much candy did you eat?" Instead of saying, "I do not know," and getting in trouble, maybe you would rather say, "I ate precisely 10.7 milliliters of candy, Mom." Make your parents proud of their candy-eating genius child (you) with this simple science project. Investigate which formula is the most accurate for estimating the volume of an M&M'S® candy. Geometry is the study of how to use math to describe and investigate different points, lines, and shapes. The way that a shape is described in geometry is with a formula, which is simply a mathematical way to calculate different properties of a shape, like size, area, or volume. Volume is a unique property of three-dimensional shapes because three-dimensional shapes take up space in three different directions. Most real-world objects are three-dimensional: balls, cars, food, etc. The problem with geometric formulas is that they describe "perfect" or "ideal" shapes. A sphere is an "ideal" three-dimensional shape that is perfectly circular in all directions. Even though a ball is spherical in shape, it is not a perfect sphere. If geometric formulas describe "ideal" shapes and not "real" shapes, then how are they useful in the "real" world? Most real-world shapes are not simple, and calculating their properties exactly would require complicated geometry. Instead, the properties of real-world shapes can be approximated, or estimated, to the best possible measure with a geometric formula. This is called making a geometric model, and the most important part of making a good geometric model is choosing the formula that best describes the object. Even the most irregular objects can be modeled by using geometry: cars, airplanes, electronics, plastics, food, etc. Geometric modeling is very important for manufacturing because a product needs to have the same shape, made the same way, every time. In this mathematics science project you will use geometry to produce a mathematical model of an M&M'S candy. If you look closely, you will see that the shape of an M&M'S candy is a bit irregular - it is not quite perfectly round. It looks like a ball shape (sphere) that has been squished on one side, as shown in Figure 1 below. You will test three different formulas (one for a sphere, one for a cylinder, and one for an ellipsoid) to see which formula makes the best geometric model of an M&M'S candy. You will test each formula by using it to calculate the volume of an M&M'S candy, and then you will compare your result to the actual volume of a single piece of candy. Figure 1. An M&M'S candy looks like a sphere that has been flattened on one side. The blue M&M'S on the left is shown from the top, while the orange M&M'S on the right is shown from the side.
Terms and Concepts
- What is a geometric model? Why can it be useful?
- Which formula do you think will calculate the most accurate volume of an M&M'S® candy? Why?
- How are geometric formulas different from each other?
- What other ways can you use geometric formulas to measure real-world objects?
You can do further research by visiting the following websites, which give information about geometry and calculating areas and volumes: Mississippi State University. (n.d.). Agricultural and Biological Engineering: Tools - Unit-Free Volume Calculators. College of Agriculture and Life Sciences, Agricultural & Biological Engineering Department. Retrieved December 3, 2012. First measure the actual volume of an M&M'S candy with a water displacement test.
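The steps in the next section walk through this water displacement test. As a preview, the arithmetic it asks for is just a subtraction and a division; the short Python sketch below (not part of the original project) uses made-up example numbers (100 mL of water rising to 163 mL after 100 candies), not real measurements.

```python
def volume_per_candy(start_ml, final_ml, candy_count):
    """Average volume of one candy in mL (1 mL = 1 cubic centimeter)."""
    displaced_ml = final_ml - start_ml   # total volume of all the candies
    return displaced_ml / candy_count    # average volume of a single candy

# Example numbers only; substitute your own measurements.
print(volume_per_candy(100.0, 163.0, 100))   # 0.63 mL per candy
```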
In your lab notebook, make a data table like Table 1 below. You will be recording your measurements in it. Fill the metric measuring glass or cup with 100 milliliters (mL) of water. Make sure it has exactly 100 mL. You can do this by looking at where the top of the water is when your eyes are level with it. Add 100 M&M'S to the water. Why do you think you are using 100 M&M'S instead of just one? Dropping just one M&M'S into a glass of water will not change the water level by much. By using 100 M&M'S you will be able to more easily see a larger change in the water level that will be easier to measure. You can then divide the change you see for a hundred M&M'S by the number 100 to calculate the volume of a single M&M'S candy. In the data table in your lab notebook, record the new, final volume of water. Estimate the new volume as closely as you can based on the marks on the glass. For example, if it is right between a mark that says "150" and one that says "175," then you can estimate that it is at about 163 mL. Subtract the beginning volume of water (100 mL) from the new volume of water (that you just measured) to calculate the actual volume of the 100 M&M'S. Write this in your data table. To continue the example above, if the volume for 100 M&M'S is 163 mL, then you do this calculation: 163 mL - 100 mL = 63 mL. Meaning that 100 M&M'S have a volume of 63 mL. Divide your answer by 100. This is the actual volume of a single M&M'S candy in milliliters. Write this answer in your data table. You will be referring to this value later. In our example you would do this calculation: 63 mL / 100 = 0.63 mL. Meaning that each M&M'S candy has a volume of 0.63 mL. Remember, this is just an example of the calculations. You will have to do the experiment yourself to see what the real volume is!
Table 1 (Actual Volume, mL). Columns: Starting Volume (mL) | Final Volume (mL) | Actual Volume of 100 M&M'S (mL) | Actual Volume of 1 M&M'S candy (mL)
In your lab notebook, make a data table like this one. You will be recording your water displacement measurements in it, which you will be using to figure out the actual volume of 1 M&M'S candy. Next you will test different mathematical formulas to see which one is the best geometric model of an M&M'S® candy. Before doing this, make sure you do your background research and know what the terms radius, diameter, height, sphere, cylinder, and ellipsoid mean. You will be making some careful measurements with (fresh!) M&M'S candies to use in the different formulas. In your lab notebook, make a data table like Table 2 below to record your measurements in.
Table 2 (Candy dimensions). Columns: Long Side (cm) | Short Side (cm). Rows: Diameter of 10 M&M'S; Diameter of 1 M&M'S candy (divide by 10); Radius of 1 M&M'S candy (divide by 2)
In your lab notebook, make a data table like this one. You will be recording your measurements in it, which you will be using to do calculations with different formulas. Measure the long side of 10 fresh M&M'S lined up in a row. (Do not use any of the M&M'S that you used in the water displacement test!) Do this by using the following neat little trick: Place a piece of paper on a clean table or countertop. On top of the paper, place a small amount of clay or Play-Doh. Flatten it and stretch it out into a little line. Make it run along the length of the ruler. Line up 10 fresh M&M'S on their flat side, end-to-end, as shown in Figure 2 below. Poke them into some clay to keep them in a neat row with each M&M'S touching the next and no gaps in between them.
To measure the long sides of 10 M&M'S, line them up length-wise in clay or Play-Doh, next to the ruler. This image only shows 4 M&M'S lined up, but you will be using 10 M&M'S. Measure the whole line of 10 M&M'S from end-to-end. Write this measurement in the new data table in your lab notebook. Write it in the "Long Side" column as the "Diameter of 10 M&M'S." Divide your answer by 10. This is the long diameter of a single M&M'S candy. Write the data in your data table. Divide your answer by 2. This is the long radius of a single M&M'S candy. Write the data in your data table. Remove the M&M'S from the clay. On the clay, line up the 10 M&M'S on their side so that you are measuring across the short side, or short diameter, as shown in Figure 3 below. Again, use the clay to hold the M&M'S in place in a neat row with each M&M'S® touching the next. To measure the short sides of 10 M&M'S, line them up on their side in clay or Play-Doh, next to the ruler. This image only shows 4 M&M'S lined up, but you will be using 10 M&M'S. Repeat steps 5-8, but this time measure and do calculations for the short side of the M&M'S. In the data table in your lab notebook, record your measurements in the "Short Side" column. Next you will be making some calculations of volume using different formulas to see which one best calculates the volume of the M&M'S. In your lab notebook, make a data table like Table 3 below to record your results for each formula. For the "Actual Volume" use the value you determined in step 1f. Note: Milliliters are the same as cubic centimeters (cm^3). So, even though you determined the actual volume of one M&M'S candy in milliliters, you can write this value in cubic centimeters instead.
Table 3 (Calculated Volume, cm^3). Rows: Sphere - Long Radius; Sphere - Short Radius; Cylinder; Ellipsoid; and the Actual Volume from step 1f.
In your lab notebook, make a data table like this one to record your results for each formula. Next we will calculate the volume of an ellipsoid. Click on the "Ellipsoid" link and you will see the calculator shown in Figure 6 below. Use the Ellipsoid Volume Calculator (from Dept. of A&BE at MSU, 2006) to determine the volume an M&M'S candy would have if it were a perfect ellipsoid. To use the ellipsoid volume calculator, do the following: Type the long diameter into the box under "Major Axis" and into the box under "Minor Axis" because in the case of M&M'S® they are actually the same. Then type the short diameter into the box under "Vertical Axis." Click "CALCULATE." Write the answer in your data table next to "Ellipsoid." Now you are ready to make a bar graph of your data. You can make one by hand or you can try using the Create a Graph website for kids from the National Center for Education Statistics. Along the x-axis (the horizontal axis), make one bar for each type of volume calculation you did, such as a sphere using the long radius, a sphere using the short radius, a cylinder, and an ellipsoid. Also include a bar for the actual volume that you determined in step 1f. On the y-axis (the vertical axis) put the volume measurements in cubic centimeters (cm^3). How does each of the different calculated volumes compare to the actual volume that you measured? Which ones were more and which ones were less? Why do you think this is? Which calculation came the closest? Which formula do you think is the best one to use for an M&M'S candy? Another way to look at your data is to calculate the difference between each calculation and the actual volume measurement. You can do this by subtracting the actual volume from the calculated volume for each formula, as sketched below.
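As a sketch of those comparisons, the short Python program below applies the standard textbook formulas for a sphere, a cylinder, and an ellipsoid (the same shapes the online calculators handle) and subtracts the actual volume from each result. The radius and volume values in it are hypothetical examples, not measurements; replace them with the numbers from your own data tables.

```python
import math

long_r, short_r = 0.66, 0.33   # cm, example measurements only
actual = 0.63                  # cm^3, example displacement result

models = {
    "Sphere - Long Radius":  (4/3) * math.pi * long_r**3,
    "Sphere - Short Radius": (4/3) * math.pi * short_r**3,
    "Cylinder":              math.pi * long_r**2 * (2 * short_r),  # radius^2 x height
    "Ellipsoid":             (4/3) * math.pi * long_r * long_r * short_r,
}

for name, volume in models.items():
    difference = volume - actual
    print(f"{name}: {volume:.2f} cm^3 (difference {difference:+.2f} cm^3)")
```

With these example numbers the ellipsoid comes closest to the actual volume, which is what the flattened-sphere shape of the candy would lead you to expect.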
A bigger number is more different from the actual volume than a smaller number. You can also calculate something called the percent difference by dividing your answer by the actual volume. If you make another graph comparing the percent difference of each method, what does it show? You can use this same experiment to find the best formula to calculate any other volume. Try using it for an egg, a football, an apple, a bar of soap, or any other irregular shaped object. Just make sure that you choose an object that can safely be submerged in water! Which formula is the best? For a more advanced science project, you can try to investigate how the shape of a candy affects how well many of those candies pack together. Use the water displacement test on a couple differently shaped candies to determine the actual volume of a single candy. Then fill a measuring glass with a certain amount of each type of candy, one type at a time (without water). See how high this filled the glass and divide this total volume by the number of candies to determine how much space one candy took, on average, when taking packing into account. How much space does each type of candy take up in the measuring glass (when packing is taken into account) compared to the actual volume of one candy? In other words, which types of candies pack together the best? How do you think their shape affects this? The Ask an Expert Forum is intended to be a place where students can go to find answers to science questions that they have been unable to find using other resources. If you have specific questions about your science fair project or science fair, our team of volunteer scientists can help. Our Experts won't do the work for you, but they will make suggestions, offer guidance, and help you troubleshoot. If you like this project, you might enjoy exploring these related careers: Mathematicians are part of an ancient tradition of searching for patterns, conjecturing, and figuring out truths based on rigorous deduction. Some mathematicians focus on purely theoretical problems, with no obvious or immediate applications, except to advance our understanding of mathematics, while others focus on applied mathematics, where they try to solve problems in economics, business, science, physics, or engineering. Math teachers love mathematics and understand it well, but much more than that, they enjoy sharing their enthusiasm for the language of numbers with students. They use a variety of tools and techniques to help students grasp abstract concepts and show them that math describes the world around them. By helping students conquer fears and anxieties about math, teachers can open up many science and technology career possibilities for students. Teachers make a difference that lasts a lifetime! You can find this page online at: http://www.sciencebuddies.org/science-fair-projects/project_ideas/Math_p022.shtml?from=Blog You may print and distribute up to 200 copies of this document annually, at no charge, for personal and classroom educational use. When printing this document, you may NOT modify it in any way. For any other use, please contact Science Buddies.
http://www.sciencebuddies.org/science-fair-projects/project_ideas/Math_p022.shtml?from=Blog
Branches of Complex Functions
2.4 Branches of Functions
In Section 2.2 we defined the principal square root function and investigated some of its properties. We left unanswered some questions concerning the choices of square roots. We now look at these questions because they are similar to situations involving other elementary functions. In our definition of a function in Section 2.1, we specified that each value of the independent variable in the domain is mapped onto one and only one value in the range. As a result, we often talk about a single-valued function, which emphasizes the "only one" part of the definition and allows us to distinguish such functions from multiple-valued functions, which we now introduce. Let w = f(z) denote a function whose domain is the set D and whose range is the set R. If w is a value in the range, then there is an associated inverse function z = g(w) that assigns to each value w the value (or values) of z in D for which the equation f(z) = w holds. But unless f takes on the value w at most once in D, the inverse function g is necessarily many-valued, and we say that g is a multivalued function. For example, the inverse of the function w = f(z) = z^2 is the square root function g(w) = w^{1/2}. For each value z other than z = 0, the two points z and -z are mapped onto the same point w = f(z); hence g is in general a two-valued function. The study of limits, continuity, and derivatives loses all meaning if an arbitrary or ambiguous assignment of function values is made. For this reason, in Section 2.3 we did not allow multivalued functions to be considered when we defined these concepts. When working with inverse functions, you have to specify carefully one of the many possible inverse values when constructing an inverse function, as when you determine implicit functions in calculus. If the values of a function f are determined by an equation that they satisfy rather than by an explicit formula, then we say that the function is defined implicitly or that f is an implicit function. In the theory of complex variables we present a similar concept. Let F be a multiple-valued function. A branch of F is any single-valued function f that is continuous in some domain (except, perhaps, on the boundary). At each point z in the domain, f assigns one of the values of F(z). Associated with each branch of a function is a branch cut. We now investigate the branches of the square root function.
Example 2.20. We consider some branches of the two-valued square root function F(z) = z^{1/2} (for z ≠ 0). Define the principal square root function as f1(z) = r^{1/2} cos(θ/2) + i r^{1/2} sin(θ/2) = r^{1/2} e^{iθ/2}, where r = |z| and θ = Arg(z), so that r > 0 and -π < θ ≤ π. The function f1 is a branch of F. Using the same notation, we can find other branches of the square root function. For example, if we let f2(z) = r^{1/2} e^{i(θ+2π)/2} = -f1(z), then f1 and f2 can be thought of as the "plus" and "minus" square root functions. The negative real axis is called a branch cut for the functions f1 and f2. Each point on the branch cut is a point of discontinuity for both functions f1 and f2.
Example 2.21. Show that the function f1 is discontinuous along the negative real axis. Solution. Let z0 = -a, with a > 0, denote a negative real number. We compute the limit as z approaches z0 through the upper half-plane {z : Im(z) > 0} and the limit as z approaches z0 through the lower half-plane {z : Im(z) < 0}. In polar coordinates these limits are given by lim_{(r,θ)→(a,π)} r^{1/2}(cos(θ/2) + i sin(θ/2)) = i a^{1/2} and lim_{(r,θ)→(a,-π)} r^{1/2}(cos(θ/2) + i sin(θ/2)) = -i a^{1/2}. As the two limits are distinct, the function f1 is discontinuous at z0. Remark 2.4. Likewise, f2 is discontinuous at z0. The mappings w = f1(z), w = f2(z), and the branch cut are illustrated in Figure 2.18. (a) The branch f1. (b) The branch f2. Figure 2.18 The branches f1 and f2 of F(z) = z^{1/2}.
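A quick numerical illustration of Example 2.21: the sketch below is our own code (using Python's cmath module rather than the text's notation). It evaluates the branch f1 just above and just below the branch cut and shows the jump from roughly i·a^{1/2} to -i·a^{1/2}.

```python
import cmath, math

def f1(z):
    """Principal square root branch: sqrt(r) * exp(i*theta/2), theta = Arg z in (-pi, pi]."""
    r, theta = abs(z), cmath.phase(z)
    return math.sqrt(r) * cmath.exp(1j * theta / 2)

def f2(z):
    """The other branch: the negative of the principal branch."""
    return -f1(z)

z0 = -4.0
print(f1(complex(z0, 1e-12)))    # approach from the upper half-plane: about  2i
print(f1(complex(z0, -1e-12)))   # approach from the lower half-plane: about -2i
```

The two values differ by about 4i, confirming the discontinuity of f1 across the negative real axis; f2 shows the same jump with the opposite sign.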
We can construct other branches of the square root function by specifying that an argument of z is given to lie in the interval α < θ ≤ α + 2π. The corresponding branch, denoted fα, is fα(z) = r^{1/2} cos(θ/2) + i r^{1/2} sin(θ/2), where r = |z| > 0 and α < θ ≤ α + 2π. The branch cut for fα is the ray θ = α, which includes the origin. The point z = 0, common to all branch cuts for the multivalued square root function, is called a branch point. The mapping w = fα(z) and its branch cut are illustrated in Figure 2.19. Figure 2.19 The branch fα of F(z) = z^{1/2}.
The Riemann Surface for w = z^{1/2}
A Riemann surface is a construct useful for visualizing a multivalued function. It was introduced by Georg Friedrich Bernhard Riemann (1826-1866) in 1851. The idea is ingenious - a geometric construction that permits surfaces to be the domain or range of a multivalued function. Riemann surfaces depend on the function being investigated. We now give a nontechnical formulation of the Riemann surface for the multivalued square root function. Figure 2.A A graphical view of the Riemann surface for w = z^{1/2}. Consider F(z) = z^{1/2}, which has two values for any z ≠ 0. Each function f1 and f2 in Figure 2.18 is single-valued on the domain formed by cutting the z plane along the negative x axis. Let D1 and D2 be the domains of f1 and f2, respectively. The range set for f1 is the set H1 consisting of the right half-plane and the positive v axis; the range set for f2 is the set H2 consisting of the left half-plane and the negative v axis. The sets H1 and H2 are "glued together" along the positive v axis and the negative v axis to form the w plane with the origin deleted. We stack D1 directly above D2. The edge of D1 in the upper half-plane is joined to the edge of D2 in the lower half-plane, and the edge of D1 in the lower half-plane is joined to the edge of D2 in the upper half-plane. When these domains are glued together in this manner, they form R, which is a Riemann surface domain for the mapping w = F(z) = z^{1/2}. Portions of D1, D2, and R, together with their images, are shown in Figure 2.20. (a) A portion of D1 and its image under w = z^{1/2}. (b) A portion of D2 and its image under w = z^{1/2}. (c) A portion of R and its image under w = z^{1/2}. Figure 2.20 Formation of the Riemann surface for w = z^{1/2}. The beauty of this structure is that it makes this "full square root function" continuous for all z ≠ 0. Normally, the principal square root function would be discontinuous along the negative real axis, as points near but above that axis would get mapped to points close to i|z|^{1/2}, and points near but below the axis would get mapped to points close to -i|z|^{1/2}. This material is coordinated with our book Complex Analysis for Mathematics and Engineering. (c) 2012 John H. Mathews, Russell W. Howell
http://math.fullerton.edu/mathews/c2003/ComplexFunBranchMod.html
Chapters 29 and 30
Magnetism is the result of electric charge motion. As soon as an electric charge moves, it creates a magnetic effect perpendicular to its direction of motion. This can be easily verified with a magnetized nail. If a steel nail is placed inside a coil (a coil is a piece of copper wire wrapped around a cylinder) that is connected to a battery, the nail becomes a magnet. Since the nail itself is perpendicular to the surface of each loop of the coil, we can say that the current, or the motion of electric charges, is perpendicular to the nail (the magnetic field). See the figure shown below. The reason for naming the poles of the nail N and S is that if a magnet is hung at its middle by a string, it turns and aligns itself approximately with the North and South poles of the Earth. The end pointing to the North is called its North pole and the other end, pointing to the South, is called the South pole of the magnet. The Geographic North and the Magnetic North are off by a few degrees, however, and the position of the magnetic north changes slightly over the years. The reason for the magnetic effect of the Earth is also the motion of charged particles. The molten metal at the core of the Earth is ionized and has a rotational motion parallel to the Equator, creating a huge magnet whose field must be perpendicular to the Equator plane and therefore passes through the North pole and the South pole of the Earth.
Magnetic Field Lines: Magnetic field lines are generally accepted to emerge from the North pole of a magnet and enter its South pole, as shown. This can be easily verified by placing a tiny compass around a bar magnet at different positions. It is impossible to separate the North and South poles of a magnet. They coexist. If a bar magnet is cut at the middle (its neutral line), each piece becomes an independent magnet possessing a South pole and a North pole. This is not the case with electric charges. It is possible to have a separate negative charge and a separate positive charge.
The Theory Behind Magnetism
As we know, atoms are made of negative electrons, positive protons, and neutral neutrons. Protons and neutrons have almost the same mass but are much heavier than electrons. Protons and neutrons form the nucleus. Electrons orbit the nucleus. Electrons are considered the moving charges in atoms. Electrons generate magnetic fields perpendicular to their planes of rotation. The following argument gives you an idea of how some materials exhibit magnetic properties, although it does not reflect the exact picture. Visualize a single electron orbiting the nucleus of its atom. For simplicity, visualize a sphere in which this electron spins at a rate of, say, 10^15 turns per second. This means one thousand trillion turns every second. Therefore, at every one-thousand-trillionth of a second it possesses a particular plane of rotation in space. In other words, the orientation of its plane of rotation changes 10^15 times every second. That is why we say it creates an electronic cloud. Three such orientations are sketched below. For each plane of rotation, the magnetic field vector is shown to have its maximum effect at the center of the circle of rotation and perpendicular to that circle. An object of mass 1 lb, for example, contains a very large number of atoms, and each atom, depending on the element, contains several electrons, and each electron at any given instant of time has its own orientation of rotation and its own orientation of magnetic field vector.
We are talking about hundreds of trillions of trillions of different magnetic field vectors in a piece of material. There is no guarantee that all such electrons have their magnetic field vectors oriented in a single direction so that their magnetic effects add up. An orbital is a space around a nucleus where the possibility of finding electrons is high. An orbital can be spherical, dumbbell-shaped, or of a few other geometric shapes. We assumed a spherical orbital for simplicity. Each orbital can be filled with 2 electrons. The two electrons in each orbital have opposite directions. This causes their magnetic field vectors to have opposite directions as well. The result is a zero net magnetic effect. This way, each atom that contains an even number of electrons will have all of its orbitals filled with pairs of electrons. Such atoms are magnetically neutral. However, atoms that contain odd numbers of electrons will have an orbital that is left with a single electron. Such atoms are not magnetically neutral by themselves. They become magnetically neutral when they form molecules with the same or other atoms. There are a few elements, such as iron, cobalt, and nickel, that have a particular atomic structure. This particular structure allows orbitals to have single (unpaired) electrons. Under normal circumstances, there is no guarantee that all orbitals of the atoms in a piece of iron, for example, will have their magnetic field vectors lined up parallel to each other. But if a piece of pure iron is placed in an external magnetic field, the planes of rotation of those single electrons line up such that their magnetic fields line up with the direction of the external field, and after the external field is removed, they tend to keep the new orientation; therefore the piece of pure iron becomes a magnet itself. The conclusion, however, is that magnetism is the result of electric charge motion and that the magnetic field vector is perpendicular to the electron's plane of rotation.
Like Poles and Unlike Poles: Like poles repel and unlike poles attract. This is similar to electric charges. Recall that like charges repel and unlike charges attract. One difference is that separate positive and negative charges are possible, while separate North poles and South poles are not. See the figure shown.
Uniform Magnetic Fields: The magnetic field around a bar magnet is not uniform and varies with distance from its poles. The reason is that the field lines around a bar magnet are not parallel. The density of field lines is a function of distance from the poles. In order to make parallel magnetic field lines, the bar magnet must be bent into the shape of a horseshoe. The field lines that emerge from the N-pole of the magnet then have to enter its S-pole directly and are necessarily parallel. This is true only for the space in between the poles. This is shown in the figure below.
Force of a Magnetic Field on a Moving Charge: When a moving charge enters a magnetic field such that field lines are crossed, the charge finds itself under a force perpendicular to its direction of motion that gives it a circular motion. If a charge enters a field such that its direction of motion is parallel to the field lines and no field line is crossed, then the charge will not be affected by the field and keeps going straight. When you are in a classroom facing the board, visualize a downward uniform magnetic field (the ceiling being the North pole and the floor the South pole of the magnet).
Visualize a positive charge entering from the left side of the classroom going toward the right side. This positive charge will initially be forced toward the board as shown: If the downward field vector is (B), and the charge velocity crossing the magnetic field to the right and at a right angle is (v), the magnitude of the force (F) initially pushing the moving charge toward the board is given by: F = q v B. (Fig. 2) If charge (q) is making an angle θ with the magnetic field lines, then (F) will have a smaller value given by: F = q v B sin θ. (Fig. 1) If (v) is to the left, (F) will be toward the class. Also, if (B) is upward, (F) will be toward the class. If the charge is negative, (F) will be toward the class. Therefore, there are three elements that can affect the direction of (F). If any two of these three elements reverse simultaneously, the direction of (F) will remain the same. The unit for B is the Tesla (T). If (F) is in Newtons, (v) in m/s, and (q) in Coulombs, then (B) will be in Tesla. One Tesla of magnetic field strength is the strength that can exert a force of 1 N on 1 Coul. of electric charge that is moving at a speed of 1 m/s perpendicular to the magnetic field lines.
Example 1: A 14-μC charge enters from the left perpendicular to a downward magnetic field of strength 0.030 Tesla at a speed of 1.8x10^5 m/s. Find the magnitude and direction of the initial force on it as soon as it crosses a field line. Refer to Fig. 2. Solution: Referring to Fig. 2, it is clear that the charge will initially be pushed toward the board. The magnitude of this initial push is F = q v B ; F = (14x10^-6 C)(1.8x10^5 m/s)(0.030 T) = 0.076 N.
Example 2: A 14-μC charge enters from the left through a 65° angle with respect to a downward magnetic field of strength 0.030 Tesla at a speed of 1.8x10^5 m/s. Find the magnitude and direction of the initial force on it as soon as it crosses a field line. Refer to Fig. 1. Solution: Referring to Fig. 1, it is clear that the charge will initially be pushed toward the board. The magnitude of this initial push is F = q v B sin θ ; F = (14x10^-6 C)(1.8x10^5 m/s)(0.030 T) sin(65°) = 0.069 N.
Example 3: An electron enters a 0.013-T magnetic field normal to its field lines and experiences a 3.8x10^-15 N force. Determine its speed. Solution: F = q v B ; v = F / (qB) = (3.8x10^-15 N) / [(1.6x10^-19 C)(0.013 T)] = 1.8x10^6 m/s.
Motion of a Charged Particle in a Magnetic Field: So far, we have learned that when a charged particle crosses magnetic field lines, it is forced to change direction. This change of direction does not stop as long as there are field lines to be crossed in the pathway of motion of the charged particle. Magnetic field lines keep changing the direction of motion of the charged particle, and if the field is constant in magnitude and direction, they give a circular motion to the charged particle. The reason is that F is perpendicular to V at any instant and position, and that exactly defines the concept of centripetal force. Recall the concept of centripetal force. Centripetal force, Fc, is always directed toward the center of rotation. Such a force makes an object of mass M traveling at speed V go around a circle of radius R. In fact, it is the force of the magnetic field, Fm, that supplies the necessary centripetal force, Fc. We may equate the two after comparing the two figures below: setting Fm = Fc gives qvB = Mv^2/R, or R = Mv/(qB). This formula is useful in finding the radius of curvature of the circular path of a charged particle when caught in a magnetic field.
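As a quick numeric check of the two formulas just introduced, F = q v B sin θ and R = Mv/(qB), the short Python sketch below re-derives the numbers of Examples 1-3 above and of Example 4, which follows. The code is only an illustration; all values are taken from those examples.

```python
import math

def magnetic_force(q, v, B, theta_deg=90.0):
    """F = q v B sin(theta)."""
    return q * v * B * math.sin(math.radians(theta_deg))

def radius_of_rotation(M, v, q, B):
    """R = M v / (q B) for a charge circling perpendicular to the field."""
    return M * v / (q * B)

print(magnetic_force(14e-6, 1.8e5, 0.030))                  # Example 1: ~0.076 N
print(magnetic_force(14e-6, 1.8e5, 0.030, 65.0))            # Example 2: ~0.069 N
print(3.8e-15 / (1.6e-19 * 0.013))                          # Example 3: v ~ 1.8e6 m/s
print(radius_of_rotation(1.67e-27, 4.6e5, 1.6e-19, 0.107))  # Example 4: ~0.045 m
```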
Example 4: A proton (q = 1.6x10^-19 C, M = 1.67x10^-27 kg) is captured in a 0.107-T magnetic field and spins along a circle of radius 4.5 cm. Find its speed knowing that it moves perpendicular to the field lines. Solution: R = Mv/(qB) ; solving for v: v = RqB/M ; v = (0.045 m)(1.6x10^-19 C)(0.107 T) / (1.67x10^-27 kg) = 4.6x10^5 m/s.
Example 5: In a certain device, alpha-particles enter a 0.88-T magnetic field perpendicular to its field lines. Find the radius of rotation they attain if each carries an average kinetic energy of 520 keV. An alpha-particle is a helium nucleus. It contains 2 protons and 2 neutrons. Mp = 1.672x10^-27 kg and Mn = 1.674x10^-27 kg. Solution: Since the K.E. of each alpha-particle is given, knowing its mass (2Mp + 2Mn), its speed can be calculated from K.E. = (1/2)Mv^2. Note that 1 eV = 1.6x10^-19 J ; therefore, 1 keV = 1.6x10^-16 J. K.E. = (1/2)Mv^2 ; 520(1.6x10^-16 J) = (1/2)[2(1.672x10^-27 kg) + 2(1.674x10^-27 kg)] v^2 ; v = 5.0x10^6 m/s ; R = Mv/(qB) ; R = [2(1.672x10^-27 kg) + 2(1.674x10^-27 kg)](5.0x10^6 m/s) / [(2 x 1.6x10^-19 C)(0.88 T)]. Each alpha-particle has 2 protons and carries 2 x 1.6x10^-19 C of electric charge. R = 0.12 m = 12 cm.
Test Yourself 1:
1) In magnetizing a nail that is wrapped around with a coil of wire, the direction of the electric current in the loops of the wire is (a) parallel to the nail. (b) perpendicular to the nail if the loops are closely packed. (c) almost perpendicular to the nail if the loops are not closely packed. (d) b & c. click here
2) The direction of the magnetic field in a magnetized nail is (a) along the nail. (b) perpendicular to the nail. (c) neither a nor b.
3) If the four bent fingers of the right hand point in the direction of the current in the loops of a magnetized coil, then the thumb points to (a) the South pole of the magnet coil. (b) the North pole of the magnet coil. (c) the direction normal to the magnet coil. click here
4) The magnetized nail experiment shows that (a) magnetic fields occur anywhere there is an iron core. (b) anywhere a charged particle moves, a magnetic effect develops in all directions. (c) anywhere a charged particle moves, there appears a magnetic effect that is normal to the direction of the charged particle's motion.
5) An electron orbiting the nucleus of an atom (a) does not develop a magnetic field because its radius of rotation is extremely small. (b) generates a magnetic effect that is of course normal to its plane of rotation at any instant. (c) cannot generate any magnetic effect because of its extremely small charge. click here
6) In a hydrogen molecule, H2, the net magnetic effect caused by the rotation of its two electrons is zero because (a) at any instant, the two electrons spin in opposite directions, creating opposite magnetic effects. (b) the instant its two electrons pass by each other, they repel and change to planes of rotation that are opposite to each other, causing opposite magnetic effects. (c) both a & b.
7) The reason that atoms, in general, are magnetically neutral is that (a) electrons of atoms must exist in pairs spinning in opposite directions, thereby neutralizing each other's magnetic effect. (b) not all atoms are iron atoms and therefore do not have any magnetic effect in them. click here
8) The reason iron and a few other elements can maintain magnetism in them is that (a) these elements can have orbits in them that contain unpaired electrons.
(b) under an external magnetic field, the orbits in these element with a single electron in them can orient themselves to the direction of the external field and stay that way. (c) both a & b. 9) For a bar-magnet, the magnetic field lines (a) emerge from its South pole and enter its North pole. (b) emerge from its North pole and enter its South pole. (c) emerge from its poles and enter its middle, the neutral zone. click here 10) If a bar magnet is cut at its middle, the neutral zone, (a) one piece becomes a pure North pole and the other piece a pure South pole. (b) both pieces will have their own South and North poles because magnetic poles coexist. (c) neither a nor b. 11) The magnetic field strength around a bar magnet is (a) uniform. (b) nonuniform that means varies with distance from its poles. (c) uniform at points far from the poles. click here 12) The magnetic field in between the poles of a horseshoe magnet is (a) uniform. (b) nonuniform. (c) zero. 13) The magnetic field in between the poles of a horseshoe magnet (a) varies with distance from its either pole. (b) is directed from N to S. (c) has a constant magnitude and direction and is therefore uniform. (d) b & c. click here Problem: Visualize you are sitting in a class facing the board. Suppose that the ceiling is the North pole of a huge horseshoe magnet and the floor is its South pole; therefore, you are sitting inside a uniform downward magnetic field. Also visualize a fast moving positive charge emerges from the left wall and is heading for the right wall; in other words, the velocity vector of the positive charge acts to the right. Answer the following questions: 14) The charge will initially be pushed (a) toward you. (b) downward. (c) toward the board. 15) The charge will take a path that is (a) straight toward the board. (b) circular at a certain radius of rotation. (c) curved upward. click here 16) If the radius of curvature is small such that the charge does not leave the space between the poles of the magnet it will have a circular motion that looking from the top will be (a) counterclockwise. (b) clockwise. (c) oscillatory. 17) If instead, a negative charge entered from the left side, it would spin (a) counterclockwise. (b) clockwise. 18) If a positive charge entered from the right side heading for the left, looking from the top again, it would spin (a) clockwise. (b) counterclockwise. click here 19) If the polarity of a magnetic field is reversed, the spin direction of a charged particle caught in it will (a) remain the same. (b) reverse as well. 20) The force, F of a magnetic field, B on a moving charge, q is proportional to the (a) filed strength, B. (b) particle's velocity, V. (c) the amount of the charge, q. (d) sinθ of the angle V makes with B. (e) a, b, c, & d. 21) The force, F of a magnetic field, B on a moving charge, q is given by (a) F = qB. (b) F = qV. (c) F = qvBsinθ. 22) In the formula F = qvBsinθ, if q is in Coulombs, v in m/s, and F in Newtons, then B is in (a) N/(Coul. m/s). (b) Tesla. (c) a & b. click here 23) The magnitude of the force that a 0.0025-T magnetic field exerts on a proton that enters it normal to its field lines and has a speed of 3.7x106 m/s is (a) 1.5x1015N !! (b) 0 (c) 1.5x10-15N 24) An electron moves at a speed of 7.4x107 m/s parallel to a uniform magnetic field. The force that the magnetic field exerts on it is (a) 3.2x10-19N. (b) 0 (c) 4.8x10-19N. click here 25) The force that keeps a particle in circular motion is (a) circular force. (b) centripetal force. (c) tangential force. 
26) When a charged particle is caught in a magnetic field and it keeps spinning at a certain radius of rotation, the necessary centripetal force is (a) the force of the magnetic field on it that keeps it spinning. (b) the electric force of the charged particle itself. (c) both a & b. click here
27) Equating the force of the magnetic field, Fm, and the centripetal force, Fc, looks like (a) Mv/R = qvB. (b) v^2/R = qvB. (c) Mv^2/R = qvB.
28) In the previous question, solving for R yields: (a) R = qvB/(Mv^2). (b) R = Mv/(qB). (c) both a & b.
29) The radius of rotation that a 4.0-μCoul. charge carried by a 3.4-μg mass moving at 360 m/s normal to a 0.78-T magnetic field attains is (a) 39 cm. (b) 3.9 m (c) 7.8 m click here
30) One electron-volt of energy (1 eV) is the energy of (1 electron) in an electric field where the potential is (1 Volt). This follows the formula (a) P.E. = qV where q is replaced by the charge of 1 electron and V is 1 volt. (b) P.E. = Mgh. (c) neither a nor b.
31) Knowing that 1 eV = 1.6x10^-19 J, if a moving proton has an energy of 25000 eV, its energy calculated in Joules is (a) 4.0x10^-15 J (b) 1.56x10^23 J (c) 4.0x10^-19 J click here
32) A 25-keV proton enters a 0.014-T magnetic field normal to its field lines. Each proton has a mass of 1.67x10^-27 kg. The radius of rotation it finds is (a) 1.63 m (b) 2.63 m (c) 3.63 m.
It is possible to run a charged particle through a magnetic field perpendicular to the field lines without any deviation from a straight path. All one has to do is place an electric field in a way that neutralizes the effect of the magnetic field. Let's visualize sitting in a classroom (facing the board, of course) in which the ceiling is the N-pole, the floor the S-pole, and a positive charge is to travel from left to right normal to the downward magnetic field lines. As was discussed before, the magnetic field does initially push the positive charge toward the board. Now, if the board is positively charged and the back wall negatively charged, the charged particle will be pushed toward the back wall by this electric field. It is possible to adjust the strengths of the magnetic and electric fields such that the forces they exert on the charge are equal in magnitude but opposite in direction. This makes the charge travel straight to the right without deviation. The resulting apparatus is called a "velocity selector." For a velocity selector we may set the magnetic force on the charge equal to the electric force on the charge: Fm = Fe. This results in q v B = q E ; v B = E, or v = E / B. If there is a large number of charged particles traveling at different speeds but in the same direction, and we want to separate the ones with a certain speed from the rest, this device proves useful.
Example 6: In a left-to-right flow of alpha-rays (helium nuclei) coming out of a radioactive substance, a 0.0225-T magnetic field is placed in the downward direction. What magnitude electric field should be placed around the flow such that only 0.050-MeV alpha-particles survive both fields? Solution: K.E. = 0.050 MeV means 0.050 mega electron-volts, that is, 5.0x10^4 electron volts. Since each eV is equal to 1.6x10^-19 Joules, K.E. = 0.050 x 10^6 x 1.6x10^-19 J ; K.E. = 8.0x10^-15 J. To find v from K.E., use K.E. = (1/2)Mv^2. Using M = 6.692x10^-27 kg (verify) for the mass of an alpha-particle, the speed v is v = 1.5x10^6 m/s. Using the velocity selector formula: v = E / B ; E = vB ; E = (1.5x10^6 m/s)(0.0225 T) = 34000 N/C. An application of the foregoing discussion is the cyclotron.
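Before moving on to the cyclotron, here is a short numeric check of the velocity-selector relation v = E/B using the numbers of Example 6. It is a sketch only; the variable names are ours.

```python
import math

KE_joules = 0.050e6 * 1.6e-19                 # 0.050 MeV converted to joules
M_alpha   = 2 * 1.672e-27 + 2 * 1.674e-27     # 2 protons + 2 neutrons, in kg

v = math.sqrt(2 * KE_joules / M_alpha)        # from K.E. = (1/2) M v^2
E = v * 0.0225                                # E = v B for the selector, in N/C

print(v, E)   # about 1.5e6 m/s and 3.5e4 N/C; Example 6 rounds v first and gets 34000 N/C
```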
Cyclotron is a device that accelerates charged particles for nuclear experiments. It works on the basis of the motion of charged particles in magnetic fields. When a particle of mass M and charge q moving at velocity v is caught in a magnetic field B, as we know, it takes a circular path of radius R given by R = Mv/(qB). The space in which the particles spin is cylindrical. To accelerate the spinning particles to higher and higher velocities, the cylinder is divided into two semicylinders called the "Dees." The dees are connected to an alternating voltage. This makes the polarity of the dees alternate at a certain frequency. It is arranged such that when positive particles are in one of the dees, that dee becomes positive to repel the positive particles and the other dee is negative to attract them, and as soon as the particles enter the negative dee, the polarity changes: the negative dee becomes positive to repel them again. This continual process keeps accelerating the particles to a desired speed. Of course, as the speed increases, the particles acquire greater and greater radii until they are ready to leave the cylindrical space, at which point they bombard the target nuclei under experiment. A sketch is shown below. If the speed of the particles becomes comparable to the speed of light (3.00x10^8 m/s), their masses increase according to Einstein's theory of relativity. The mass increase must then be taken into account when calculating the period of rotation and energy of the particles. These types of calculations are called "relativistic" calculations.
Period of Rotation: The period (T), the time it takes for a charged particle to travel one circle or 2πR, can be calculated. From the definition of speed, v = 2πR / T; solving for T yields: T = 2πR / v (*). v may be found from the formula for the radius of rotation, R = Mv/(qB). This yields: v = qBR / M. Substituting for v in (*) yields: T = 2πM / (qB).
Example 7: In a cyclotron, protons are to be accelerated. The strength of the magnetic field is 0.024 Tesla. (a) Find the period of rotation of the protons, (b) their frequency, (c) their final speed if the final radius before hitting the target is 2.0 m, and (d) their K.E. in Joules and eVs. Solution: (a) T = 2πM / (qB) ; T = (2π x 1.672x10^-27 kg) / [(1.6x10^-19 C)(0.024 T)] = 2.7x10^-6 s. (b) f = 1 / T ; f = 3.7x10^5 s^-1, or f = 3.7x10^5 Hz. (c) v = Rω ; v = R(2πf) ; v = (2.0 m)(2π)(3.7x10^5 s^-1) = 4.6x10^6 m/s. This speed (although very high) is still small enough compared to the speed of light (3.00x10^8 m/s) that the relativistic effects can be neglected. (d) K.E. = (1/2)Mv^2 ; K.E. = (1/2)(1.672x10^-27 kg)(4.6x10^6 m/s)^2 = 1.8x10^-14 J. K.E. = 1.8x10^-14 J (1 eV / 1.6x10^-19 J) = 110,000 eV = 110 keV = 0.11 MeV.
An Easy Relativistic Calculation: According to Einstein's theory of relativity, when a mass M travels close to the speed of light it becomes more massive and more difficult to accelerate further. The mass increase is given by M = γMo, where γ = 1 / SQRT(1 - v^2/c^2) and Mo is the rest mass.
Example 8: In a cyclotron, electrons are accelerated to a speed of 2.95x10^8 m/s. (a) By what factor does the electron mass increase? Knowing that the rest mass of the electron is Mo = 9.108x10^-31 kg, determine (b) its mass at that speed. Solution: (a) Let's find γ step by step. First find (v/c), then (v/c)^2, which is the same as v^2/c^2, then 1 - v^2/c^2, then the square root of it, and finally 1 over that square root. The sequence is as follows: (v / c) = 2.95 / 3.00 = 0.98333... (Note that the 10^8 powers cancel.) v^2 / c^2 = (v / c)^2 = (0.98333...)^2 = 0.96694...
1 - v2 / c2 = 1 - 0.96694... = 0.0330555... SQRT( 1 - v2 / c2 ) = 0.1818119 γ = 1 / SQRT( 1 - v2 / c2 ) = 1 / 0.1818119 = 5.50 (The # of times mass of electron increases) (b) M = Moγ ; M = ( 9.108x10-31kg )( 5.50 ) = 5.01x10-30 kg Example 9: Find the value of γ for protons in Example 7. Solution: To be done by students Sources of Magnetic Field: Aside from permanent magnets, magnetic fields are mostly generated by coils of wire. A coil is a wire wrapped around a cylinder. Most coils are cylindrical. Long coils produce a fairly uniform magnetic field inside them specially toward their middle and along their axis of symmetry. To understand the magnetic field inside a coil, we need to know that magnetic field around a long straight wire as well as that of a single circular loop. Magnetic Field Around a Straight and Long Wire: For a very long and straight wire carrying a current I, we expect the magnetic field B to be perpendicular to the direction of the current. Since anywhere around the wire this property must equally exist, the magnetic field lines are necessarily concentric circles with the current (the wire) perpendicular to the planes of the circles at their common center. The figure is shown below: As r increases, B of course decreases as is also apparent from the equation shown above. The direction of B is determined by the right-hand rule again. If the thumb shows the direction of I, the four bent fingers point in the direction of B. In the above formula μo = 4π x 10-7 Tm/A is called the permeability of free space (vacuum) for the passage of magnetic field lines. For any material or substance permeability μ may be measured. For every material a constant may then be defined that relates μ to μo. Example 10: If in the above figure, I = 8.50Amps, determine the magnitude of B at r = 10.0cm, 20.0cm, and 30.0cm. Solution: Using the formula B = μoI / 2πr, we get: ; B1 = 1.7x10-5 Tesla B2 = 8.5x10-6 Tesla ; B3 = 5.7x10-6 Tesla Magnetic Field of a Current-Carrying Circular Loop: The magnetic field produced by a current-carrying circular loop is necessarily perpendicular to the plane of the loop. The reason is that B must be perpendicular to the current I that the loop carries. The direction is determined by the right-hand rule as was discussed in the nail example at the beginning of this chapter. The magnitude at the center of the loop is given by the following formula. Pay attention to the figure as well. Example 11: If in the above figure, I = 6.80Amps, determine the magnitude of B for r = 10.0cm, 20.0cm, and 30.0cm. Solution: Using the formula B = μoI / 2r, we get: ; B1 = 4.3x10-5 Tesla B2 = 2.1x10-5 Tesla ; B3 = 1.4x10-5 Tesla Magnetic Field Inside a Solenoid: A solenoid is a long coil of wire for which the length-to-radius ratio is not under about 10. The magnetic field of a single loop of wire is weak. A solenoid has many loops and the field lines inside it specially in the vicinity of its middle are fairly parallel and provide a uniform and stronger field. Placing an iron core inside the solenoid makes the field even stronger, some 400 times stronger. μiron = 400 μo. (See figure below). The formula for magnetic field strength of a solenoid is: B = μonI where n is the number of turns per unit length. n in SI units is # of turns per meter of the solenoid. Example 11: A solenoid is 8.0cm long and has 2400 turns. A 1.2-A current flows through it. Find (a) the strength of the magnetic field inside it toward the middle. 
(b) If an iron core is inserted in the solenoid, what will the field strength be? Solution: (a) B = μonI ; B = (4π x 10-7 Tm/A)(2400 turns / 0.080m)(1.2A) = 0.045 Tesla. (b) Iron increases μo by a factor of 400; therefore, B = (400)(0.045T) ≈ 18T. One Application of Solenoid: Anytime you start your car, a solenoid similar to the one in the above example gets magnetized and pulls in an iron rod. The strong magnetic field of the solenoid exerts a strong force on the iron rod (core) and gives it a great acceleration and high speed within a short distance. The rod is partially in the solenoid to begin with and gets fully pulled in after the solenoid is connected to the battery when you try to crank the engine. The current that feeds the solenoid may not even be one amp, but the connection it causes between battery and starter pulls several amps from the battery. The forcefully moving rod collides with a copper connector that connects the starter to the battery. This connection allows a current of 30 Amps to 80 Amps to flow through the starter motor and crank your car. The amperage depends on how cold the engine is. The colder the engine, the more viscous the oil, and the more power is needed to turn the crankshaft. Example 12: The magnetic field inside a 16.3-cm long solenoid is 0.027T when a current of 368 mA flows through it. How many turns does it have? Solution: B = μonI ; n = B /(μoI) ; n = 0.027T / [(4π x 10-7 Tm/A)(0.368A)] = 58,400 turns/m. This is the number of turns per meter. If the solenoid were 1.00 m long, it would have 58,400 turns. It is only 0.163 m long, and therefore it has fewer turns. If N is the number of turns, we may write: N = nL ; N = (58,400 turns/m)(0.163m) = 9,520 turns. Definition of Ampere: We defined 1A as the flow of 1C of electric charge in 1s. A preferred definition for the unit of electric current, the Ampere, uses the force per unit length that two infinitely long parallel wires exert on each other. Recall that when an electric current flows through an infinitely long and straight wire, it generates a magnetic field around it that can be sensed along concentric circles perpendicular to the wire. If two such wires are parallel to each other and the currents in them flow in the same direction, they attract each other. If the currents flow in opposite directions, they repel each other. The magnitude of the force they exert on each other depends on the distance between the wires and the currents that flow through them. If two parallel wires that are 1m apart carry equal currents in the same direction, and the two wires attract each other with a force of 2x10-7 N/m in vacuum, then the current through each wire is 1Amp. Test Yourself 2: 1) A velocity selector takes advantage of (a) two perpendicular electric fields. (b) a set of perpendicular electric and magnetic fields. (c) two perpendicular magnetic fields. click here 2) The forces (Fm and Fe) that the magnetic and electric fields of a velocity selector exert on a charge q must be (a) equal in magnitude. (b) opposite in direction. (c) both a & b. 3) Fm and Fe in a velocity selector are given by (a) Fm = qvB and Fe = qE. (b) Fm = qB and Fe = qE. (c) Fm = qvB and Fe = qE such that Fm = Fe. click here 4) Setting Fm = Fe and solving for v results in (a) v = B/E. (b) v = E/B. (c) v = EB. 5) The formula v = E / B (a) depends on the amount of charge. (b) does not depend on the amount of the charge. (c) does not depend on the sign of the charge. (d) b & c.
6) What strength uniform electric field must be placed normal to a 0.0033-T uniform magnetic field such that only charged particles at a speed of 2.4x106m/s get passed through along straight lines? (a) 7900N/Coul. (b) 9700N/Coul. (c) 200N/Coul. click here 7) A cyclotron is a device that is used to (a) accelerate charged particles to high speeds and energies. (b) accelerate charged particles to speeds close to that of light. (c) perform experiments with the nuclei of atoms. (d) a, b, & c. 8) Speed of light is (a) 3.00x108m/s. (b) 3.00x10-8m/s. (c) 3.00x105 km/s. (d) a & c. 9) A speed of 3.00x10-8m/s is (a) faster than speed of light. (b) slower than the motion of an ant. (c) even germs may not move that slow. (d) extremely slow, almost motionless. (e) b, c, and d. click here 10) In a cyclotron, a charged particle released near the center (a) finds itself in a perpendicular magnetic field and starts spinning. (b) spins at a certain period of rotation given by T = 2πM/(qB). (c) is also under an accelerating electric field that alternates based on the period of rotation of the charged particle. (d) a, b, and c. 11) As the particles in a cyclotron accelerate to high speeds comparable to that of light (a) a mass increase must be taken into account. (b) the mass increase affects the period of rotation. (c) a & b. 12) The magnetic field around a current-carrying long wire (a) is perpendicular to the wire and at equal distances from the wire has the same magnitude. (b) is parallel to the wire. (c) both a and b. click here 13) The magnetic field around a current-carrying long wire (a) may be pictured as concentric circles at which the field vectors act radially outward. (b) may be pictured as concentric circles at which the field vectors act tangent to the circles. (c) has a constant magnitude that does not vary with distance from the wire. (d) b & c. 14) The formula for the field strength around a current carrying long wire is (a) B = μoI / (2πR). (b) B = μoI / (2R). (c) B = I / (2R). click here 15) μo= 4π x 10-7 Tm/Amp is called (a) the permittivity of free space for the passage of electric field effect. (b) the permeability of free space for the passage of the magnetic field effect. (c) neither a nor b. 16) The farther from a wire that carries a current, the (a) stronger the magnetic effect. (b) the more constant the magnetic effect. (c) the weaker the magnetic effect. click here Problem: Draw two concentric circles in a plane perpendicular to a wire that passes through the center of the circles. Suppose that the wire carries a constant electric current, I, upward. Also suppose that the radius of the greater circle is exactly twice that of the smaller circle. You also know that if you were to show magnetic field vectors, you would draw them tangent to those circles. Draw a vector of length say 1/2 inch tangent to the greater circle as the magnitude of B at that radius. Then draw another vector tangent to the smaller circle to represent the field strength at the other radius. Answer the following questions: 17) The magnitude of the field strength at the smaller radius is (a) a vector of length 1 inch. (b) a vector of length 1/4 inch. (c) a vector of length 1/16 inch. click here 18) Based on the upward current in the wire, and looking from the top, the direction of vectors you draw must be (a) clockwise. (b) counterclockwise. 19) What should be the length of the tangent vector you may draw at another circle whose radius is 5 times that of the smaller circle? (a) 1/25 inch. (b) 1/125 inch. 
(c) 1/5 inch. 20) The magnetic field that a current carrying loop of wire (circular) generates is (a) perpendicular to the plane of the loop. (b) has its maximum effect at the center of the loop and normal to it. (c) has an upward direction if the current flows in the circular loop horizontally in the counterclockwise direction. (d) a, b, & c. click here 21) A solenoid is a coil whose length is (a) at most 5 times its radius. (b) at least 10 times its radius. 22) The magnetic field inside a solenoid and in the vicinity of its middle (a) is fairly uniform. (b) is non-uniform. (c) has a magnitude of B = μon I where n is its number of turns. (d) has a magnitude of B = μon I where n is its number of turns per unit length. (e) a & d. 23) A solenoid is 14.0cm long and has 2800. turns. A current of 5.00A flows through it. The magnetic field strength inside and near its middle is (a) 0.0176T. (b) 0.126T. (c) 0.00126T. click here 24) The magnetic field strength inside and at the middle of a 8.0cm long solenoid is 0.377T and it carries a 5.00-Amp current. The number of turns of the solenoid is (a) 4,800 turns. (b) 12,000 turns. (c) 6,000 turns. 25) The formula for the magnetic field strength, B, at the center of a coil, not a solenoid, than has N turns and carries a current, I, is (a) B = Nμo I / (2R). (b) B = Nμo I / (2πR). click here Magnetic Force between Two Parallel and Current-carrying Wires: The figure on the right shows two parallel and infinite wires that are a distance d apart and carry positive currents I1 and I2. The magnetic field that I1 generates at a perpendicular distance d from it is labeled B1. B1 is perpendicular to I2. This makes the force of B1 on I2 to be directed toward wire 1 as labeled by F12. (If you are facing the board in a classroom and B is downward, positive charges going from left to right will first be pushed toward the board). Similarly, the force of B2 (caused by wire 2) on wire 1 labeled by F21 is toward wire 2 as shown. The magnitude of F12 is: F12 = I2ℓ2B1, and the magnitude of F21 is: F21 = I1ℓ1B2 Since the two forces are toward each other; therefore , the wires attract each other. Note that ℓ1 and ℓ2 are treated as vectors that show the direction of the moving charges in the wires. If we choose each as a unit of length, say 1m in SI, the calculated forces will be per meter of the wires. The figure on the right shows two parallel and infinite wires that are a distance d apart and carry positive currents I1 and I2. The magnetic field that I1 generates at a perpendicular distance d from it is labeled B1. B1 is perpendicular to I2. This makes the force of B1 on I2 to be directed toward wire 1 labeled by F12. (If you are facing the board in a classroom and B is downward, positive charges going from left to right will first be pushed toward the board). Similarly, the force of B2 (caused by wire 2) on wire 1 labeled by F21 is toward wire 2 as shown. The magnitude of F12 is: F12 = I2ℓ2B1 (1) The magnitude of F21 is: F21 = I1ℓ1B2 (2) Since the two forces are toward each other, the wires attract each other. For opposite currents they repel. Note that ℓ1 and ℓ2 are treated as vectors that show the direction of the moving charges in the wires. If we choose each as a unit of length, say 1m in SI, the calculated force will be force per meter of the wires. 
Since B1 = μoI1/(2πd), and B2 = μoI2/(2πd), (1) and (2) become F12 = μoI1I2ℓ2 /(2πd) and F21 = μoI1I2ℓ1 /(2πd) or, in general, force per unit length on each wire becomes: F/ℓ = μoI1I2/(2πd) (3) Definition of Ampere: |Based on Equation 3, if two infinite and parallel wires that are 1m apart carry the same current and exert a force of 2x10-7N on every meter of each other, the current in each is 1 Amp.| Biot-Savart Law for a Current Element: |Biot and Savart along with assistance from mathematician Laplace, figured out a way to show some similarity between the magnetic filed due to an infinite current carrying wire and the electric field of an infinite line of electric charges. The magnetic field that an infinite wire carrying current I generates is given by B1 = μoI/(2πR) If we write this as B1 = 2μoI/(4πR) and let k' = μo /(4π), then B1 becomes: B1 = 2k'I/R (4) We do not have a static line of charge to just create a total electric field at P, for example. We have a line of moving charges (Current I) that generates a total magnetic field at P. Here, we use Idℓ as a differential current element that creates a differential magnetic field dB at P. The strongest dB belongs to that differential current Idℓ that is just passing by C, the center of the circle shown. Biot and Savart showed that the magnitude of dB as given below is the correct and valid form: The current elements are of course continuous and not isolated as was the case for a line of electric charges. As θ increases and approaches 90°, The current element moves up and becomes closer to point P. When θ is exactly 90°, Idℓ is at C, and at its closest distance of R to point P. Find the magnetic field strength at a distance R from an infinite straight wire that carries a current I. Solution: We may use the above figure and add the contributions of all (Idℓ)'s from -∞ to +∞. To do this, it is better to use angle α . Let α vary from (-α1) to (+α2). Note that α and θ are complementary angles and sine of one is equal to cosine of the other. We will replace sinθ by cosα . Both dℓ and r must be expressed in terns of R and α . Since tanα = ℓ/R, then ℓ = Rtanα . Differentiation results in dℓ=Rsec2α dα . Since cosα = R/r, we get r = R/cosα or r = Rsecα . Substituting for sinθ , r, and dℓ, in dB, we get: If α1 and α2 are replaced by -π/2 and π/2 respectively, B becomes: B = μoI /(2πR) as expected. |Calculate the magnetic field strength at a distance y from the center of a circular loop of radius a that carries a current I. Solution: The current element Idℓ is shown in the figure on the right. For every Idℓ there is an opposite direction Idℓ on the opposite side of the loop. In this figure every dB has a horizontal and a vertical component. The horizontal components (dBx) add up, but the vertical ones (dBy) cancel due to symmetry. Each horizontal component is (dB)x = (dB)sinα The main formula is the Biot-Savart formula that expresses dB in terms of Idℓ. We may write: For points far from the center and still on the axis, let y → ∞ in which case the a2 of the denominator can be neglected and the result is: |Calculate the magnetic field strength, B at a typical point along the axis of a solenoid of length ℓ that contains N loops and carries a current Solution: We need to add the contribution (dB)x of each loop at a given point along the axis. Equation (*) above may be used. It is easier to divide the solenoid into small packs of loops each with a length dx as shown. 
If n = N/ℓ is the number of loops per unit length of the solenoid, then ndx is the number of loops within dx. The differential current that ndx carries is therefore (ndx)I. From the figure x = a tanθ. This makes dx = a sec2 θ dθ. Replacing I in (*) by nIdx = nIa sec2 θ dθ and x by a tanθ, results in: For an infinite solenoid, θ1 = -π/2 and θ2 = +π/2 that results in B = μonI as expected. Ampere was not comfortable with the work of Biot and Savart. He objected that the idea of current element Idℓ was not precise. He wrote the experimental formula B = μoI /(2πr) for the magnetic field around an infinite current-carrying wire as B(2πr) = μoI and stated that vector B is tangent to any circular path as shown in the figure and if its magnitude is multiplied by 2πr, it must be equal to μoI. Ampere was not comfortable with the work of Biot and Savart. He objected that the idea of current element Idℓ was not precise. He wrote the experimental formula B = μoI /(2πr) that we already know as B(2πr) = μoI and stated that vector B is tangent to any circular path as shown in the figure and if its magnitude is multiplied by 2πr, it must be equal to μoI. Even for any arbitrary closed path that encloses the wire, if we calculate the tangential components of B along that arbitrary path, the result is equivalent to μoI. The component of B along dℓ is nothing but B ∙dℓ = Bdℓ cosθ as shown. Integrating B ∙dℓ along the closed path around the wire can be set equal to μoI. This establishes the Ampere's Law as: where I is the current through the surface enclosed by the path. The following figure shows an arbitrary path around the wire and how the tangential component of B may be calculated at each point along the path. |Use Ampere's law to derive the formula for the field inside a solenoid. Solution: If we take a cross-sectional area of a solenoid that is cut in half along its axis, and choose a rectangular closed path that covers a length L of loops and apply the Ampere's law to the total current going through the closed path, B can be calculated along the axis of the solenoid. Referring to the figure on the right, we may write the line integral as: N = nL because n = # of loops per meter of the solenoid.
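The solenoid result B = μonI can also be cross-checked numerically by summing the axial field of the individual loops, using the loop-axis formula derived above from Biot-Savart. The solenoid dimensions in this sketch are made up for illustration; for a coil whose length is many times its radius the summed value agrees with μonI to a fraction of a percent.

import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

def loop_axis_B(I, a, y):
    # Axial field of one circular loop of radius a, a distance y from its
    # center (the on-axis result derived above): mu0*I*a^2 / (2*(a^2+y^2)^(3/2))
    return mu0 * I * a**2 / (2 * (a**2 + y**2) ** 1.5)

# Hypothetical solenoid: 0.50 m long, 1000 turns, radius 1.0 cm, current 2.0 A
L, N, a, I = 0.50, 1000, 0.010, 2.0
n = N / L                                                 # turns per meter
centers = [(k + 0.5) * L / N - L / 2 for k in range(N)]   # loop positions along the axis
B_sum = sum(loop_axis_B(I, a, y) for y in centers)        # field at the middle of the solenoid

print(f"sum over loops: {B_sum:.6f} T")
print(f"mu0 * n * I   : {mu0 * n * I:.6f} T")             # the Ampere's-law result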
http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapters%2029%20and%2030.htm
13
95
The information in this section (and in Lab Manual Appendix C) is supplementary to the lecture part of the course, and contains material from a few of the lectures that is not contained in the textbook. It is included here to free students from having to meticulously copy formulae, and should be brought to lectures when it is being discussed in class. Half of the page is left blank, so that students can add their own notes or diagrams from reading and/or Energy is ``the ability to do work''. We can express energy in units of work, and there are several systems of units for doing this (see Lab Manual, Appendix D). We use the S.I. system in this course. There are 4 base dimensions from which all other quantities can be derived: length (L), mass (M), time (T) and temperature (). The important derived quantities are (dimensions are in parentheses and the corresponding S.I. units in square brackets): Energy and its Dimensions and Units - Force is the ``push'' required to accelerate unit mass (M) [kg] at a unit of distance per unit time per unit time (M L T-2) [kg m s-2 = 1N (Newton)]. - Work or Energy is a unit of force (M L s-2 = 1N (Newton)] displaced through a unit of distance: T-2 = Q) [ kg s-2 = N m = 1 J (Joule)]. - Power or Heat flux is the energy flow per time or the rate of energy flow (rate of work) (M T-3 = Q T-1) [ J s-1 = 1 W (Watt)]. - Heat Flux Density is rate of energy gained or lost unit surface area (M T-3 = Q Energy can be found in many forms, the 5 of greatest significance in climatic applications are: - RADIANT ENERGY: energy associated with electromagnetic waves, requires no medium (i.e. can travel through space) - KINETIC ENERGY: energy due to motion: m = mass (M), V = speed (L so, (KE) = T-2 = Q) - GEOPOTENTIAL ENERGY: energy derived from gravity: GPE = mgh m = mass (M), g = gravitational acceleration (L h = height (L) (GPE) = (M L T-2 L) = M T-2 = Q) - INTERNAL ENERGY: sensible heat (heat that can be felt) of a body due to random motion of molecules: IE = mcT m = mass (M), c = specific heat (Q ) T = (IE) = (M Q - LATENT HEAT: energy involved in phase changes of a substance, LE = mL m = mass (M), L = latent heat (Q = (M Q M-1 = Q) Energy can be neither created nor destroyed, but it can be converted from one form to another (i.e. between forms 1-5 above). Example: Solar radiation (form 1), heats surface (4), warms air which rises (2 and 3) and evaporates water (5). - Radiation -- heat transfer due to rapid oscillations of electromagnetic fields. May also be considered as waves. - Conduction -- heat transfer by internal molecular activity within a substance with no net external motion. Requires contact between molecules in a substance. Solids, especially metals, are good conductors of heat while liquids and gases are poor due to lower molecular density. In atmosphere conduction is negligible except within the first few millimeters from a surface. - Convection -- heat (and mass) transfer within a fluid by mass motion resulting in transport and mixing. Convection is very important in the atmosphere. Two types of convective motion exist: - Free - buoyancy due to thermal differences (war air is less dense, so will rise; cold air is more dense and will sink). - Forced - due to physical overturning via shear (air flowing over a rough surface induces vertical motion). 
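A short numerical sketch (not part of the original notes) can make the five energy forms concrete. The parcel mass, speed, lift, and warming below are made-up values; the specific heat of air is the 1010 J kg-1 K-1 quoted later in these notes, and the latent heat of vaporization (about 2.5 x 10^6 J kg-1) is a standard approximate value supplied here.

# Energy forms for a hypothetical 1-kg parcel of moist air
m  = 1.0        # mass, kg
g  = 9.81       # gravitational acceleration, m s-2
c  = 1010.0     # specific heat of air at constant pressure, J kg-1 K-1
Lv = 2.5e6      # latent heat of vaporization of water, J kg-1 (approximate)

KE  = 0.5 * m * 10.0**2    # kinetic energy at a speed of 10 m/s
GPE = m * g * 1000.0       # geopotential energy gained by lifting the parcel 1000 m
IE  = m * c * 5.0          # internal (sensible) energy change for a 5 K warming
LE  = 0.001 * Lv           # latent heat needed to evaporate 1 g of water

print(f"KE = {KE:.0f} J, GPE = {GPE:.0f} J, IE = {IE:.0f} J, LE = {LE:.0f} J")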
Radiant energy incident on a body may be reflected, transmitted or absorbed, so that: r = reflectivity (the fraction of incident radiation that is reflected), t = transmissivity (the fraction a = absorptivity (the fraction absorbed), of the body for radiation of wave-length . This can also be applied to bands of many wavelengths. For the band of solar radiation r is called the albedo (). Relates the way in which the ``emissive power'' (total energy emitted by a body) of a black body (a perfect emitter) is dependent upon its temperature at all wavelengths. This is the law from which Wien's Law and the Stephan-Boltzmann Law are derived. States that a rise in temperature of a body increases its emissive power and also increases the proportion of shorter wave lengths which it emits: = wavelength of maximum emission (m) T0 = surface temperature (K) States that the total energy emitted by a black body, integrated over all wavelengths, is proportional to the fourth power of its absolute temperature (Temperature in Kelvins): where I = energy emitted by the black body, = = 5.67 x 10-8W For non-blackbodies inclusion of the surface emissivity ) allows calculation of the emission: Assuming no transmission through a body, it follows that for a given wavelength and temperature the absorptivity of a body equals its The solar constant (I0) is the amount of solar radiation received from outside the Atmosphere in one unit time on one unit surface area placed perpendicular to the solar beam (as in plane CD in the diagram below) at the Earth's mean distance from the Sun. Present estimates suggest a value of 1367 W m-2 which therefore represents the upper limit for solar radiation receipt in the Earth-Atmosphere system. Only locations where the Sun can be directly overhead can receive this value (e.g. in the Tropics). The extra-terrestrial solar radiation (I) received at all other latitudes is less than I0 and is given by the cosine law of illumination. For example I on the plane AB (representing the top of the Atmosphere) in the following figure is given: I = I0cosZ where I0 = solar radiation received on plane CD perpendicular to beam (i.e. conforms to definition of solar constant), Z = angle between the beam and a perpendicular to the surface (zenith angle); Radiation incident on a surface. The zenith angle depends on latitude, season and time of day and gives rise to considerable variation in the amount of energy received over the exterior of the planet. Further variations in solar radiation receipt at the surface are due to differing path lengths of radiation through the atmosphere due to Earth-Sun geometry and the effectiveness of atmospheric attenuation. - Absorption: the atmosphere is a relatively poor and selective absorber of shortwave. The principal agents are O3, cloud droplets, particles and water vapour. - Scattering: small gaseous molecules scatter or diffuse shortwave radiation. The shortest wavelengths are preferentially - Reflection: like a mirror, solar radiation is reflected from larger particles and dominantly by clouds. The reflectivity, or albedo (, ranges between 0 and 1 with a maximum value of 1 for a perfect reflector like a mirror) of cloud tops is between 0.4 and 0.8 with a mean of 0.55. 
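Wien's Law, the Stefan-Boltzmann Law, and the cosine law of illumination are easy to evaluate numerically. The sketch below is illustrative only: the Stefan-Boltzmann constant and the solar constant are the values quoted above, the Wien displacement constant (about 2.90 x 10-3 m K) is not printed in these notes and is supplied here, and the two temperatures are rough round numbers.

import math

sigma = 5.67e-8      # Stefan-Boltzmann constant, W m-2 K-4 (value given above)
wien  = 2.898e-3     # Wien displacement constant, m K (supplied, not in the notes)
I0    = 1367.0       # solar constant, W m-2 (value given above)

T_sun, T_earth = 6000.0, 288.0   # rough blackbody temperatures, K

for name, T in [("Sun", T_sun), ("Earth surface", T_earth)]:
    lam_max = wien / T           # Wien's Law: wavelength of maximum emission
    E = sigma * T**4             # Stefan-Boltzmann Law: total emission
    print(f"{name:13s}: peak near {lam_max*1e6:5.2f} um, emits {E:10.1f} W m-2")

# Cosine law of illumination: I = I0 * cos(Z), Z = zenith angle
for Z in (0, 30, 60, 85):
    print(f"Z = {Z:2d} deg -> I = {I0 * math.cos(math.radians(Z)):7.1f} W m-2")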
Solar radiation reaching the surface ( K ) has 2 components: direct-beam (S) as a parallel stream from the solar disc, and diffuse (D) from all points of the sky hemisphere (having been scattered and reflected during passage through the Atmosphere) For opaque surfaces (transmissivity t = 0) shortwave radiation is either reflected or absorbed. Reflection ( K ) depends on the amount of incident radiation ( K ) and the surface The absorbed (or net) shortwave radiation at the surface (K*) is All bodies with temperature above absolute zero radiate energy consistent with their surface temperature and surface emissivity as given by Stefan-Boltzmann's Law: At temperatures typical in the Earth-Atmosphere system the wave-length of emission of a body corresponds to infra-red or longwave radiation. The Atmosphere is a relatively good (but still selective) absorber of this longwave radiation. Important absorbers are: water vapour (5-8 m and at > 15m); CO2 (13-17 m and 4.3 m); O3 (9.4-9.8 m and 15 m); and cloud droplets / particles at almost all longwave wavelengths. There is a significant gap between 8 and 13 m - called the ``atmospheric window''. Overall the atmosphere is a relatively good absorber of longwave radiation allowing comfortable living conditions to exist on Earth, hence the so-called Greenhouse (or better Atmosphere) Effect. The Atmosphere largely allows shortwave in but effectively traps a lot of longwave emitted from the surface. This results in a warming of the atmosphere which increases L (back radiation) from the atmosphere to the Earth's surface. The back radiation effectively warms the average surface temperature by 33 K over what it would be if the atmosphere did not exist. Kirchhoff's law tells us that a good absorber is a good radiator at the same wavelength. The Atmosphere radiates longwave, some to space and some to the ground ( L ). The surface also radiates longwave to the atmosphere ( The surface longwave radiation budget therefore consists of two if the surface is a black body ( = 1): but if the surface is a `grey body' with < 1.0 we need to allow for less emission but some reflection (reflection = (1 - )L ): + (1 - L < L therefore L* is negative. With clear skies L* is typically about -100 W m-2 for surfaces with an unobstructed view of the sky. The addition of cloud and/or horizon obstructions (e.g. trees, leaves, buildings, topography) substantially lessens or even reverses the radiation losses by increasing The net all-wave radiative budget of a surface (i.e. whether it is gaining or losing energy over all wavelengths) is the net result of the short- and longwave budgets, so that during the day: ||K* + L* ||K - K + L - L Q* = L* as K* = 0. Normally by day the radiative budget of a surface is in surplus (Q* is positive) and by night in deficit (Q* is negative). Since a surface is a ``massless plane'' (so thin that it has no mass) it can have no heat content and therefore the Law of Conservation of Energy requires that the radiative energy imbalances be dissipated. This is accomplished via convection or conductive exchanges towards or away from the surface: Q* = QH + QE + QG QH, QE are the convective transfers of sensible and latent heat to or from the atmosphere respectively and QG is the conductive transfer of sensible heat to or from the ground. Each of these fluxes may be an energy gain (when directed towards) or a loss (directed away) for the surface. 
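Putting the shortwave and longwave budgets together takes only a few lines. In the sketch below the symbols lost in transcription above are written out explicitly (K_down and L_down for incoming fluxes, L_up for outgoing longwave, albedo for the shortwave reflectivity, emiss for the surface emissivity); all the input values are made-up midday numbers, not measurements.

sigma = 5.67e-8            # Stefan-Boltzmann constant, W m-2 K-4

# Illustrative midday values for a vegetated surface
K_down = 800.0             # incoming shortwave, W m-2
L_down = 300.0             # incoming longwave (back radiation), W m-2
albedo = 0.23              # surface reflectivity for shortwave
emiss  = 0.95              # surface emissivity
T0     = 300.0             # surface temperature, K

K_star = K_down * (1 - albedo)                          # net shortwave
L_up   = emiss * sigma * T0**4 + (1 - emiss) * L_down   # grey-body emission + reflected longwave
L_star = L_down - L_up                                  # net longwave (normally negative)
Q_star = K_star + L_star                                # net all-wave radiation

print(f"K* = {K_star:.0f} W m-2, L* = {L_star:.0f} W m-2, Q* = {Q_star:.0f} W m-2")
# By day Q* > 0 and is dissipated as Q* = QH + QE + QG
# (sensible heat, latent heat, and ground heat flux)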
The sign convention for QH, QE, QG is that positive values represent energy flows away from the surface, and negative values represent energy flows toward the surface. Thus a surplus of radiant energy at the surface (positive Q*) results in the flow of sensible or latent heat away from the surface (positive QH, QE and/or QG). In moist environments the daytime radiative energy surplus is primarily dissipated as latent heat through the evaporation of water from the surface (QE). A continuous transfer of water occurs through transport and phase changes between subsystems of the Earth-Atmosphere system. Water is evapotranspired (evaporated and transpired through plant respiration) into the atmosphere in response to the local energy balance. Uplift leads to cooling, condensation and precipitation (rain, snow, etc.) thus returning the water to the surface again. There are also transports of water within the atmosphere by advection, across land by river run-off, and through the ground as ground water. where, p = precipitation, E = evapotranspiration, f = infiltration, r - net runoff and S net soil moisture storage. Usually p is the sole input, the other terms are outputs or storage terms, but it is possible for E (as dewfall), r, or irrigation to be inputs. p = E + f + r p = E + r + S The Energy and Water Balances are linked by the energy required to change the phase of water: QE = LVE where, LV = latent heat of vaporization. Since p is not a continuous input but occurs as an on/off process Water Budgets normally refer to periods of few days or longer. Our understanding of convective transfer is greatly aided by knowledge of 3 simple thermodynamic laws: Ideal Gas Law; Hydrostatic Law; 1st Law of Thermodynamics - Ideal Gas Law (Equation of State for the Atmosphere) Gases consist of minute molecules in a state of irregular motion. The pressure (P) of a gas results from the impacts of these molecules and for one unit volume of gas, depends on: a) and b) define the density () of the gas (ie. no. x ), and c) depends upon the gases temperature (T). These are related by: - number of molecules - mass of molecules - speed of molecules where R is the Specific Gas Constant ( R = where R* is the universal gas constant = 8.314 kg K-1; and Mair is the molecular weight of ``air'' = .028 kg (This is the same as the more familiar form: PV = n that n, the number of moles of gas is replaced by where m is the mass of gas under consideration.) - Hydrostatic Law Consider an air column divided into thin horizontal slices of thickness z with cross sectional area a. Then the volume of a slice is az. If is air density, the mass of the slice is az. If the acceleration due to gravity is g, the force on the slice (F=ma) is gaz and the pressure (pressure is force per unit area) at A due to the slice = gz. Thus the pressure difference P decreases with height: where the sign indicates that P decreases as z increases. This is the Hydrostatic Equation, and is also a statement of hydrostatic balance: the vertical decrease of pressure tending to cause uplift is balanced by the downward weight of the air. Hence there is no vertical acceleration and the atmosphere does not float away (luckily!). - 1st Law of Thermodynamics This is a Law of Conservation of Energy, and a statement of the physical changes resulting when heat is added to, or taken away from a gas. 
For solids, recall that there is a direct relationship between the heat added (Q) and the corresponding temperature change (T): With a gas we have to consider whether the gas is capable of expansion. If not, (ie. V is constant) addition of heat will result in a greater value of T than if it is able to expand because in the latter case some of the energy is used to do the work of expansion. So for gases we have two specific heats: one for constant volume (CV), and the other for constant pressure (CP), and the 1st Law of Thermodynamics is: The second form, involving CP is most useful, because the changes T, and P are easy to measure, whereas is not. The value of CP for air is 1010 J CP, CV, and R (the gas constant for air) are related by: ||work due to Motion in the atmosphere behaves according to Newton's First Law: - What primary laws of physics does atmospheric motion obey? - What are meant by real and apparent forces? - What real forces are important in the atmosphere? What is - What apparent forces are important in the atmosphere? What is their affect? - How does wind (horizontal motion) result from a balance of the forces acting on an air parcel? - Geostrophic, Gradient, and Cyclostrophic winds are approximations to the full equation of motion. When are each of these valid, and what are the assumptions upon which these models In the absence of forces, a body in motion will remain in motion. but it is Newton's second law that provides the basis for the equations of motion: which can be re-expressed as: = force, and m = mass. So, what are the important forces in the atmosphere? Apparent forces are due to the earth's rotation. The earth is a non-inertial frame of reference -- the frame of reference itself is accelerating because of the rotation. Because of this, we must add these apparent forces to the equation of motion on a sphere: The force balance in the atmosphere relates the acceleration of an air parcel to the sum of forces acting on it: - Coriolis Force ( ). The Coriolis force varies with latitude, is proportional to and acts to the right of the velocity (in the northern hemisphere). It is expressed as fV, where f is called the Coriolis parameter and is equal to 2 times the rotation rate of the earth times the sine of the latitude. [NOTE: The textbook uses the symbol FC to represent - Centrifugal Force. The centrifugal force is included in gravity. If we use a natural coordinate system, where s is the coordinate along the direction of motion, n is the coordinate to the left and perpendicular to the direction of motion, points in the direction of s, and points in the direction of n. Natural coordinate system. represents the wind vector. is the unit vector pointing in the direction of motion. is the unit vector pointing perpendicular to the direction of motion. is a centrifugal force for curved flow, and is equal to where R is the radius of curvature and is > 0 for curvature to the left and < 0 for curvature to the right. So we have, in natural coordinates: How do the forces fit into the natural coordinate system? 
The Pressure Gradient Force can have components along and across the direction of The Friction Force always acts in a direction opposite to the velocity: The Coriolis Force always acts to the right of the motion (in the northern hemisphere): = - Ff So, if we add all of these forces up, and write them as two equations, one with force components in the direction of motion , and one with force components perpendicular to the direction of motion the force balance: = - fV which in the direction (along the flow) gives: and in the direction (across the flow): Imagine an initial state with a horizontal pressure gradient and no motion. Equation 2 implies that = 0 therefore, the initial flow will be down the pressure gradient (across the isobars). As time proceeds, V becomes non-zero, and the wind accelerates until a three way balance exists between Assume that the flow is straight (ie. = 0), and frictionless (Ff = 0). These conditions will generally be met to within a few percent above a couple of kilometers, and in regions where the isobars are straight. Equation 1 implies that = 0 which means that the flow is parallel to the isobars. Equation 2 implies that the Pressure gradient force is balanced by the Coriolis force: which is the Geostrophic Wind relationship. An alternate expression can be found for the geostrophic wind by using the height gradient form of the pressure gradient force: The gradient wind approximation relaxes the requirement for straight flow. Again, equation 1 implies flow parallel to the isobars. For anti-clockwise curvature (R > 0) (cyclonic) equation 2 is: and for clockwise curvature (R < 0) (anticyclonic) equation 2 So, for the same pressure gradient, anticyclonically curved flow will have stronger winds than straight flow, which will have stronger winds that cyclonically curved flow. However in practice, pressure gradients are weaker around anticyclones. Imagine, a curved circulation which exists on small time and length scales, ignoring friction. In this case, we can ignore the Coriolis force. The pressure gradient force is balanced by the centrifugal force. The flow can be either cyclonic or anticyclonic. This equation describes the circulation in a tornado. Copyright © 2013 by Peter L. Jackson
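As a numerical illustration of the balances described above (not part of the original notes), the sketch below evaluates the Coriolis parameter f = 2 x (Earth's rotation rate) x sin(latitude), the geostrophic wind for an assumed pressure gradient, and a cyclostrophic wind for an assumed small-scale vortex; every input value is illustrative.

import math

Omega = 7.292e-5          # rotation rate of the Earth, rad s-1
rho   = 1.2               # near-surface air density, kg m-3 (assumed)
lat   = 45.0              # latitude, degrees
dPdn  = 100.0 / 100e3     # pressure gradient: 1 hPa per 100 km, in Pa m-1

f  = 2 * Omega * math.sin(math.radians(lat))   # Coriolis parameter
Vg = dPdn / (rho * f)                          # geostrophic balance: PGF = Coriolis force
print(f"f = {f:.2e} s-1, geostrophic wind = {Vg:.1f} m/s")

# Cyclostrophic balance (small scales, Coriolis and friction ignored):
# centrifugal force V^2/R balances the pressure gradient force.
R_curv = 100.0                                 # radius of curvature, m (assumed)
dPdn_t = 2000.0 / 500.0                        # e.g. a 20-hPa drop over 500 m (assumed)
V_cyc  = math.sqrt(R_curv * dPdn_t / rho)
print(f"cyclostrophic wind = {V_cyc:.1f} m/s")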
http://cirrus.unbc.ca/201/manlec/node3.html
13
59
Students should come to the study of geometry in the middle grades with informal knowledge about points, lines, planes, and a variety of two-and three-dimensional shapes; with experience in visualizing and drawing lines, angles, triangles, and other polygons; and with intuitive notions about shapes built from years of interacting with objects in their daily lives. In middle-grades geometry programs based on these recommendations, students investigate relationships by drawing, measuring, visualizing, comparing, transforming, and classifying geometric objects. Geometry provides a rich context for the development of mathematical reasoning, including inductive and deductive reasoning, making and validating conjectures, and classifying and defining geometric objects. Many topics treated in the Measurement Standard for the middle grades are closely connected to students' study of geometry. Middle-grades students should explore a variety of geometric shapes and examine their characteristics. Students can conduct these explorations using materials such as geoboards, dot paper, multiple-length cardboard strips with hinges, and dynamic geometry software to create two-dimensional shapes. Students must carefully examine the features of shapes in order to precisely define and describe fundamental shapes, such as special types of quadrilaterals, and to identify relationships among the types of shapes. A teacher might ask students to draw several parallelograms on a coordinate grid or with dynamic geometry software. Students should make and record measurements of the sides and angles to observe some of the characteristic features of each type of parallelogram. They should then generate definitions for these shapes that are correct and consistent with the commonly used ones and recognize the principal relationships among elements of these parallelograms. A Venn diagram like the one shown in figure 6.14 might be used to summarize observations that a square is a special case of a rhombus and rectangle, each of which is a special case of a parallelogram. The teacher might also ask students to draw the diagonals of multiple examples of each shape, as shown in figure 6.15, and then measure the lengths of the diagonals and the angles they form. The results can be summarized in a table like that in figure 6.16. Students should observe that the diagonals of these parallelograms bisect each other, which they might propose as a defining characteristic of a parallelogram. Moreover, they might observe, the diagonals are perpendicular in rhombuses (including squares) but not in other parallelograms and the diagonals are of equal length in rectangles (including squares) but not in other parallelograms. These observations might suggest other defining characteristics of special quadrilaterals, for instance, that a square is a parallelogram with diagonals that are perpendicular and of equal » length. Using dynamic geometry software, students could explore the adequacy of this definition by trying to generate a counterexample. Students can investigate congruence and similarity in many settings, including art, architecture, and everyday life. For example, observe the overlapping pairs of triangles in the design of the kite in figure 6.17. The overlapping triangles, which have been disassembled in the figure, can be shown to be similar. Students can measure the angles of the triangles in the kite and see that their corresponding angles are congruent. 
They can measure the lengths of the sides of the triangles and see that the differences are not constant but are instead related by a constant scale factor. With the teacher's guidance, students can thus begin to develop a more formal definition of similarity in terms of relationships among sides and angles. Investigations into the properties of, and relationships among, similar shapes can afford students many opportunities to develop and evaluate conjectures inductively and deductively. For example, an investigation of the perimeters, areas, and side lengths of the similar and » congruent triangles in the kite example could reveal relationships and lead to generalizations. Teachers might encourage students to formulate conjectures about the ratios of the side lengths, of the perimeters, and of the areas of the four similar triangles. They might conjecture that the ratio of the perimeters is the same as the scale factor relating the side lengths and that the ratio of the areas is the square of that scale factor. Then students could use dynamic geometry software to test the conjectures with other examples. Students can formulate deductive arguments about their conjectures. Communicating such reasoning accurately and clearly prepares students for creating and understanding more-formal proofs in Geometric and algebraic representations of problems can be linked using coordinate geometry. Students could draw on the coordinate plane examples of the parallelograms discussed previously, examine their characteristic features using coordinates, and then interpret their properties algebraically. Such an investigation might include finding the slopes of the lines containing the segments that compose the shapes. From many examples of these shapes, students could make important observations about the slopes of parallel lines and perpendicular lines. Figure 6.18 helps illustrate for one specific rhombus what might be observed in general: the slopes of parallel lines (in this instance, the opposite sides of the rhombus) are equal and the slopes of perpendicular lines (in this instance, the diagonals of the rhombus) are negative reciprocals. The slopes of the diagonals are Transformational geometry offers another lens through which to investigate and interpret geometric objects. To help them form images of shapes through different transformations, students can use physical objects, figures traced on tissue paper, mirrors or other reflective surfaces, figures drawn on graph paper, and dynamic geometry software. They should explore the characteristics of flips, turns, and slides and should investigate relationships among compositions of transformations. These experiences should help students develop a strong understanding of line and rotational symmetry, scaling, and properties of polygons. From their experiences in grades 35, students should know that rotations, slides, and flips produce congruent shapes. By exploring the positions, side lengths, and angle measures of the original and resulting figures, middle-grades students can gain new insights into congruence. They could, for example, note that the images resulting from transformations have different positions and sometimes different orientations » from those of the original figure (the preimage), although they have the same side lengths and angle measures as the original. Thus congruence does not depend on position and orientation. Transformations can become an object of study in their own right. 
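The slope observations suggested by figure 6.18 can also be verified numerically. The sketch below uses a made-up rhombus (not the one in the figure) and confirms that opposite sides have equal slopes while the product of the slopes of the diagonals is -1.

from fractions import Fraction

def slope(p, q):
    # Slope of the line through points p and q (assumes the line is not vertical)
    return Fraction(q[1] - p[1], q[0] - p[0])

# A made-up rhombus: every side has length 5
A, B, C, D = (0, 0), (4, 3), (9, 3), (5, 0)

print("side AB:", slope(A, B), "  side DC:", slope(D, C))        # equal slopes (parallel)
print("side BC:", slope(B, C), "  side AD:", slope(A, D))        # equal slopes (parallel)
print("slope AC x slope BD =", slope(A, C) * slope(B, D))        # -1: the diagonals are perpendicular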
Teachers can ask students to visualize and describe the relationship among lines of reflection, centers of rotation, and the positions of preimages and images. Using dynamic geometry software, students might see that each point in a reflection is the same distance from the line of reflection as the corresponding point in the preimage, as shown in figure 6.19a. In a rotation, such as the one shown in figure 6.19b, students might note that the corresponding vertices in the preimage and image are the same distance from the center of rotation and that the angles formed by connecting the center of rotation to corresponding pairs of vertices are congruent. Transformations can also be used to help students understand similarity and symmetry. Work with magnifications and contractions, called dilations, can support students' developing understanding of similarity. For » example, dilation of a shape affects the length of each side by a constant scale factor, but it does not affect the orientation or the magnitude of the angles. In a similar manner, rotations and reflections can help students understand symmetry. Students can observe that when a figure has rotational symmetry, a rotation can be found such that the preimage (original shape) exactly matches the image but its vertices map to different vertices. Looking at line symmetry in certain classes of shapes can also lead to interesting observations. For example, isosceles trapezoids have a line of symmetry containing the midpoints of the parallel opposite sides (often called bases). Students can observe that the pair of sides not intersected by the line of symmetry (often called the legs) are congruent, as are the two corresponding pairs of angles. Students can conclude that the diagonals are the same length, since they can be reflected onto each other, and that several pairs of angles related to those diagonals are also congruent. Further exploration reveals that rectangles and squares also have a line of symmetry containing the midpoints of a pair of opposite sides (and other lines of symmetry as well) and all the resulting properties. Students' skills in visualizing and reasoning about spatial relationships are fundamental in geometry. Some students may have difficulty finding the surface area of three-dimensional shapes using two-dimensional representations because they cannot visualize the unseen faces of the shapes. Experience with models of three-dimensional shapes and their two-dimensional "nets" is useful in such visualization (see fig. 6.25 in the "Measurement" section for an example of a net). Students also need to examine, build, compose, and decompose complex two-and three-dimensional objects, which they can do with a variety of media, including paper-and-pencil sketches, geometric models, and dynamic geometry software. Interpreting or drawing different views of buildings, such as the base floor plan and front and back views, using dot paper can be useful in developing visualization. Students should build three-dimensional objects from two-dimensional representations; draw objects from a geometric description; and write a description, including its geometric properties, for a given object. Students can also benefit from experience with other visual models, such as networks, to use in analyzing and solving real problems, such as those concerned with efficiency. To illustrate the utility of networks, students might consider the problem and the networks given in figure 6.21 (adapted from Roberts [1997, pp. 1067]). 
The teacher could ask students to determine one or several efficient routes that Caroline might use for the streets on map A, share their solutions with the class, and describe how they found them. Students should note the start-end point of each route and the number of different routes that they find. Students could then find an efficient route for map B. They should eventually conclude that no routes in map B satisfy the conditions of the problem. They should discuss why no such route can be found; the teacher might suggest that students count the number of paths attached to each node and look at where they "get stuck" in order to understand better why they reach an impasse. To extend this investigation, students could look for efficient paths in other situations or they might change the conditions of the map B problem to find the pathway with the least backtracking. Such an investigation in the middle grades is a precursor of later work with Hamiltonian circuits, a foundation for work with sophisticated networks. Visual demonstrations can help students analyze and explain mathematical relationships. Eighth graders should be familiar with one of the many visual demonstrations of the Pythagorean relationship: the diagram showing three squares attached to the sides of a right triangle. Students could replicate some of the other visual demonstrations of the relationship using dynamic geometry software or paper-cutting procedures, and then discuss the associated reasoning. Geometric models are also useful in representing other algebraic relationships, such as identities. For example, the visual demonstration of the identity (a + b)² = a² + 2ab + b² in figure 6.22 makes it easy to remember. A teacher might begin by asking students to draw a square with side lengths (2 + 5). Students could then partition the square as shown in fig. 6.22a, calculate the area of each section, and finally represent the total area. Students could then apply this approach to the general case of a square with sides of length (a + b), as shown in figure 6.22b, which demonstrates the identity (a + b)² = a² + 2ab + b². Many investigations in middle-grades geometry can be connected to other school subjects. Nature, art, and the sciences provide opportunities for the observation and the subsequent exploration of geometry concepts and patterns as well as for appreciating and understanding the beauty and utility of geometry. For example, the study in nature or art of golden rectangles (i.e., rectangles in which the ratio of the lengths is the golden ratio, (1 + √5)/2) or the study of the relationship between the rigidity of triangles and their use in construction helps students see and appreciate the importance of geometry in our world. Copyright © 2000 by the National Council of Teachers of Mathematics.
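The "count the number of paths attached to each node" suggestion above is the heart of the route problem: a route that travels every street exactly once exists only when zero or two intersections have an odd number of streets meeting there (and the map is connected). The two small street maps in the sketch below are made up for illustration; they are not the maps in figure 6.21.

from collections import Counter

def odd_degree_nodes(streets):
    # Return the intersections that have an odd number of streets attached.
    degree = Counter()
    for a, b in streets:
        degree[a] += 1
        degree[b] += 1
    return [node for node, d in degree.items() if d % 2 == 1]

def has_efficient_route(streets):
    # A route using every street exactly once exists (in a connected map)
    # only if 0 or 2 intersections have odd degree.
    return len(odd_degree_nodes(streets)) in (0, 2)

map_a = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]   # two odd intersections
map_b = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"),
         ("A", "C"), ("B", "D")]                                       # four odd intersections

for name, streets in [("map A", map_a), ("map B", map_b)]:
    print(name, "-> efficient route possible?", has_efficient_route(streets),
          "| odd-degree intersections:", odd_degree_nodes(streets))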
http://www.fayar.net/east/teacher.web/Math/Standards/document/chapter6/geom.htm
13
138
Basic Trigonometry II A few more examples using SOH CAH TOA Basic Trigonometry II ⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles. - Let's just do a ton of more examples, - just so we make sure that we're getting this trig function thing down well. - So let's construct ourselves some right triangles. - Let's construct ourselves some right triangles, - and I want to be very clear. - The way I've defined it so far, this will only work in right triangles. - So if you're trying to find the trig functions of angles that aren't part of right triangles, - we're going to see that we're going to have to construct right triangles, - but let's just focus on the right triangles for now. - So let's say that I have a triangle, - where let's say this length down here is seven, - and let's say the length of this side up here, - let's say that that is four. - Let's figure out what the hypotenuse over here is going to be. - So we know -let's call the hypotenuse, "h"- - we know that h squared is going to be equal to seven squared plus four squared, - we know that from the Pythagorean theorem, - that the hypotenuse squared is equal to - the square of each of the sum of the squares of the other two sides. - h squared is equal to seven squared plus four squared. - So this is equal to forty-nine plus sixteen, - forty-nine plus sixteen, - forty nine plus ten is fifty-nine, plus six is sixty-five. - It is sixty five. So this h squared, - let me write: h squared -that's different shade of yellow- - so we have h squared is equal to sixty-five. - Did I do that right? Forty nine plus ten is fifty nine, plus another six is sixty-five, - or we could say that h is equal to, if we take the square root of both sides, - square root - square root of sixty five. And we really can't simplify this at all. - This is thirteen. - This is the same thing as thirteen times five, - both of those are not perfect squares and - they're both prime so you can't simplify this any more. - So this is equal to the square root of sixty five. - Now let's find the trig, let's find the trig functions for this angle up here. - Let's call that angle up there theta. - So whenever you do it - you always want to write down - at least for me it works out to write down - - "soh cah toa". - ...soh cah toa. I have these vague memories - of my trigonometry teacher. - Maybe I've read it in some book. I don't know - you know, some... about - some type of indian princess named "soh cah toa" or whatever, - but it's a very useful mnemonic, - so we can apply "soh cah toa". - Let's find, let's say we want to find the cosine. - We want to find the cosine of our angle. - We wanna find the cosine of our angle, you say: "soh cah toa!" - So the "cah". "Cah" tells us what to do with cosine, - the "cah" part tells us - that cosine is adjacent over hypotenuse. - Cosine is equal to adjacent over hypotenuse. - So let's look over here to theta; what side is adjacent? - Well we know that the hypotenuse, - we know that that hypotenuse is this side over here. - So it can't be that side. The only other side that's kind of adjacent to it that - isn't the hypotenuse, is this four. - So the adjacent side over here, that side is, - it's literally right next to the angle, - it's one of the sides that kind of forms the angle - it's four over the hypotenuse. - The hypotenuse we already know is square root of sixty-five. 
- so it's four over the square root of sixty-five. - And sometimes people will want you to rationalize the denominator which means - they don't like to have an irrational number in the denominator, - like the square root of sixty five, - and if they - if you wanna rewrite this without a irrational number in the denominator, - you can multiply the numerator and the denominator - by the square root of sixty-five. - This clearly will not change the number, - because we're multiplying it by something over itself, - so we're multiplying the number by one. - That won't change the number, but at least it gets rid of the irrational number in the denominator. - So the numerator becomes - four times the square root of sixty-five, - and the denominator, square root of 65 times square root of 65, is just going to be 65. - We didn't get rid of the irrational number, it's still there, but it's now in the numerator. - Now let's do the other trig functions - or at least the other core trig functions. - We'll learn in the future that there's actually a ton of them - but they're all derived from these. - so let's think about what the sign of theta is. Once again go to "soh cah toa". - The "soh" tells what to do with sine. Sine is opposite over hypotenuse. - Sine is equal to opposite over hypotenuse. - Sine is opposite over hypotenuse. - So for this angle what side is opposite? - We just go opposite it, what it opens into, it's opposite the seven - so the opposite side is the seven. - This is, right here - that is the opposite side - and then the hypotenuse, it's opposite over hypotenuse. - The hypotenuse is the square root of sixty-five. - Square root of sixty-five. - and once again if we wanted to rationalize this, - we could multiply times the square root of 65 over the square root of 65 - and the the numerator, we will get seven square root of 65 - and in the denominator we will get just sixty-five again. - Now let's do tangent! - Let us do tangent. - So if i ask you the tangent - of - the tangent of theta - once again go back to "soh cah toa". - The toa part tells us what to do with tangent - it tells us... - it tells us that tangent - is equal to opposite over adjacent - is equal to opposite over - opposite over adjacent - So for this angle, what is opposite? We've already figured it out. - it's seven. It opens into the seven. - It is opposite the seven. - So it's seven over what side is adjacent. - well this four is adjacent. - This four is adjacent. So the adjacent side is four. - so it's seven over four, - and we're done. - We figured out all of the trig ratios for theta. let's do another one. - Let's do another one. - i'll make it a little bit concrete 'cause right now we've been saying, - "oh, what's tangent of x, tangent of theta." let's make it a little bit more concrete. - Let's say... - let's say, let me draw another right triangle, - that's another right triangle here. - Everything we're dealing with, these are going to be right triangles. - let's say the hypotenuse has length four, - let's say that this side over here has length two, - and let's say that this length over here is going to be two times the square root of three. - We can verify that this works. - If you have this side squared, so you have - let me write it down - - two times the square root of three squared - plus two squared, is equal to what? - this is two. There's going to be four times three. 
- four times three plus four, - and this is going to be equal to twelve plus four is equal to sixteen - and sixteen is indeed four squared. So this does equal four squared, - it does equal four squared. It satisfies the pythagorean theorem - and if you remember some of your work from 30 60 90 triangles - that you might have learned in geometry, - you might recognize that this is a 30 60 90 triangle. - This right here is our right angle, - - i should have drawn it from the get go to show that this is a right triangle - - this angle right over here is our thirty degree angle - and then this angle up here, this angle up here is - a sixty degree angle, - and it's a thirty sixteen ninety because - the side opposite the thirty degrees is half the hypotenuse - and then the side opposite the 60 degrees is a squared of 3 times the other side - that's not the hypotenuse. - So that said, we're not gonna ... - this isn't supposed to be a review of 30 60 90 triangles although i just did it. - Let's actually find the trig ratios for the different angles. - So if i were to ask you or if anyone were to ask you, what is... - what is the sine of thirty degrees? - and remember 30 degrees is one of the angles in this triangle but it would apply - whenever you have a 30 degree angle and you're dealing with the right triangle. - We'll have broader definitions in the future but if you say sine of thirty degrees, - hey, this angle right over here is thirty degrees so i can use this right triangle, - and we just have to remember "soh cah toa" - We rewrite it. soh, cah, toa. - "sine tells us" (correction). soh tells us what to do with sine. sine is opposite over hypotenuse. - sine of thirty degrees is the opposite side, - that is the opposite side which is two over the hypotenuse. - The hypotenuse here is four. - it is two fourths which is the same thing as one-half. - sine of thirty degrees you'll see is always going to be equal to one-half. - now what is the cosine? - What is the cosine of thirty degrees? - Once again go back to "soh cah toa". - The cah tells us what to do with cosine. - Cosine is adjacent over hypotenuse. - So for looking at the thirty degree angle it's the adjacent. - This, right over here is adjacent. it's right next to it. - it's not the hypotenuse. it's the adjacent over the hypotenuse. - so it's two square roots of three - adjacent over...over the hypotenuse, over four. - or if we simplify that, we divide the numerator and the denominator by two - it's the square root of three over two. - Finally, let's do the tangent. - The tangent of thirty degrees, - we go back to "soh cah toa". - soh cah toa - toa tells us what to do with tangent. It's opposite over adjacent - you go to the 30 degree angle because that's what we care about, tangent of 30. - tangent of thirty. Opposite is two, - opposite is two and the adjacent is two square roots of three. - It's right next to it. It's adjacent to it. - adjacent means next to. - so two square roots of three - so this is equal to... the twos cancel out - one over the square root of three - or we could multiply the numerator and the denominator by the square root of three. - So we have square root of three over square root of three - and so this is going to be equal to the numerator square root of three and then - the denominator right over here is just going to be three. - So that we've rationalized a square root of three over three. - Fair enough. 
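For readers skimming rather than watching, the 30-degree ratios worked out above can be collected in one place. This recap is an addition to the transcript, written in standard notation:

\[ \sin 30^\circ = \frac{2}{4} = \frac{1}{2}, \qquad \cos 30^\circ = \frac{2\sqrt{3}}{4} = \frac{\sqrt{3}}{2}, \qquad \tan 30^\circ = \frac{2}{2\sqrt{3}} = \frac{1}{\sqrt{3}} = \frac{\sqrt{3}}{3} \]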
- Now let's use the same triangle to figure out the trig ratios for the sixty degrees, - since we've already drawn it. - so what is... what is the sine of the sixty degrees? - and I think you're hopefully getting the hang of it now. - Sine is opposite over adjacent. soh from the "soh cah toa". - for the sixty degree angle what side is opposite? - what opens out into the two square roots of three, - so the opposite side is two square roots of three, - and from the sixty degree angle the adj-oh sorry - it's the opposite over hypotenuse, don't want to confuse you. - so it is opposite over hypotenuse - so it's two square roots of three over four. four is the hypotenuse. - so it is equal to, this simplifies to square root of three over two. - What is the cosine of sixty degrees? cosine of sixty degrees. - so remember "soh cah toa". cosine is adjacent over hypotenuse. - adjacent is the two side, right next to the sixty degree angle. - So it's two over the hypotenuse which is four. - So this is equal to one-half - and then finally, what is the tangent? - what is the tangent of sixty degrees? - Well tangent, "soh cah toa". Tangent is opposite over adjacent - opposite the sixty degrees - is two square roots of three - two square roots of three - and adjacent to that - adjacent to that is two. - Adjacent to sixty degrees is two. - So it's opposite over adjacent, two square roots of three over two - which is just equal to the square root of three. - And I just wanted to -look how these are related- - the sine of thirty degrees is the same as the cosine of sixty degrees. - The cosine of 30 degrees is the same thing as the sine of 60 degrees - and then these guys are the inverse of each other - and I think if you think a little bit about this triangle - it will start to make sense why. - we'll keep extending this and
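Again as an added recap, not part of the video: the 60-degree ratios from the same triangle, together with the relationships the speaker points out at the end, where "inverse" is meant in the sense of reciprocal:

\[ \sin 60^\circ = \frac{2\sqrt{3}}{4} = \frac{\sqrt{3}}{2} = \cos 30^\circ, \qquad \cos 60^\circ = \frac{2}{4} = \frac{1}{2} = \sin 30^\circ, \qquad \tan 60^\circ = \sqrt{3} = \frac{1}{\tan 30^\circ} \]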
http://www.khanacademy.org/math/trigonometry/basic-trigonometry/basic_trig_ratios/v/basic-trigonometry-ii
13
62
A Graphics object encapsulates the state information needed for basic rendering operations, such as the Component being drawn on, a translation origin, the current clip, the current color, the current font, and the current paint or XOR mode. The JavaSoft documentation describes the coordinate system as follows: "Coordinates are infinitely thin and lie between the pixels of the output device. Operations which draw the outline of a figure operate by traversing an infinitely thin path between pixels with a pixel-sized pen that hangs down and to the right of the anchor point on the path. Operations which fill a figure operate by filling the interior of that infinitely thin path. Operations which render horizontal text render the ascending portion of character glyphs entirely above the baseline coordinate." If you draw a figure that covers a given rectangle, that figure occupies one extra row of pixels on the right and bottom edges as compared to filling a figure that is bounded by that same rectangle. We saw an example of this in a previous lesson where we drew a rectangle with the same dimensions as the Canvas object on which it was drawn, and discovered that the right and bottom borders of the rectangle hung off the edge of the object and were not visible. Another implication is that if you draw a horizontal line along the same Y-coordinate as the baseline of a line of text, that line will be drawn entirely below the text, except for any descenders. When you pass coordinates to the methods of a Graphics object, they are considered relative to the translation origin of the Graphics object prior to the invocation of the method. A Graphics object describes a graphics context. A graphics context has a current clip. Any rendering operation that you perform will modify only those pixels which lie within the area bounded by the current clip of the graphics context and the component that was used to create the Graphics object. When you draw or write, that drawing or writing is done in the current color, using the current paint mode, in the current font. Numerous other classes, such as the Rectangle class and the Polygon class, are used in support of operations involving the Graphics class. You also receive a Graphics context as a parameter whenever you override either the paint() or update() methods. copyArea(int, int, int, int, int, int) - Copies an area of the component specified by the first four parameters to another location on the graphics context at a distance specified by the last two parameters. create() - Creates a new Graphics object that is a copy of the Graphics object on which it is invoked. dispose() - Disposes of the graphics context on which it is invoked and releases any system resources that it is using. This includes system resources other than memory. A Graphics object cannot be used after dispose() has been called. It is important that you manually dispose of your Graphics objects (created directly from a component or other Graphics object) when you no longer need them, rather than waiting for finalization. finalize() - Disposes of this graphics context once it is no longer referenced. getColor() - Gets this graphics context's current color. setColor(Color) - Sets this graphics context's current color to the specified color. Subsequent graphics operations using this graphics context use this specified color. setPaintMode() - Sets the paint mode of this graphics context to overwrite the destination with this graphics context's current color (as opposed to XOR mode). Subsequent rendering operations will overwrite the destination with the current color.
setXORMode(Color) - Sets the paint mode of this graphics context to alternate between this graphics context's current color and the new specified color. toString() - Returns a String object representing this Graphics object's value. translate(int, int) - Translates the origin of the graphics context to the point (x, y) in the current coordinate system. drawPolyline(int, int, int) - Draws a sequence of connected lines defined by arrays of x and y coordinates. The figure will not be closed if the first point differs from the last point. drawRect(int, int, int, int) - Draws the outline of the specified rectangle using the current color of the graphics context.. fillRect(int, int, int, int) - Fills the specified rectangle with the context's current color. Be sure to check the documentation regarding the coordinates of the right edge and bottom edge of the rectangle before using. This comment applies to all the fill methods. drawRoundRect(int, int, int, int, int, int) - Draws an outlined round-cornered rectangle using this graphics context's current color. You might need to look at a book containing a diagram to learn how to specify how the corners are rounded. fillRoundRect(int, int, int, int, int, int) - Fills the specified rounded corner rectangle with the current color. draw3DRect(int, int, int, int, boolean) - Draws a 3-D highlighted outline of the specified rectangle. The edges of the rectangle are highlighted so that they appear to be beveled and lit from the upper left corner. The boolean parameter determines whether the rectangle appears to be raised above the surface or sunk into the surface. It is raised when the parameter is true. fill3DRect(int, int, int, int, boolean) - Paints a 3-D highlighted rectangle filled with the current color. drawOval(int, int, int, int) - Draws the outline of an oval in the current color. When the last two parameters are equal, this method draws a circle. fillOval(int, int, int, int) - Fills an oval bounded by the specified rectangle with the current color. As with drawOval(), when the last two parameters are equal, the method fills a circle. drawArc(int, int, int, int, int, int) - Draws the outline of a circular or elliptical arc covering the specified rectangle. You will probably need to examine the documentation to figure out how to specify the parameters for this method as well as the fillArc() method. fillArc(int, int, int, int, int, int) - Fills a circular or elliptical arc covering the specified rectangle. drawPolygon(Polygon) - Draws the outline of a polygon defined by the specified Polygon object. Another overloaded version is available that accepts a list of coordinate values to specify the polygon. The following description of a Polygon object was taken from the JavaSoft documentation for JDK 1.1.3. |"The Polygon class encapsulates a description of a closed, two-dimensional region within a coordinate space. This region is bounded by an arbitrary number of line segments, each of which is one side of the polygon. Internally, a polygon comprises of a list of (x, y) coordinate pairs, where each pair defines a vertex of the polygon, and two successive pairs are the endpoints of a line that is a side of the polygon. The first and final pairs of (x, y) points are joined by a line segment that closes the polygon."| drawChars(char, int, int, int, int) - Draws the text given by the specified character array, using this graphics context's current font and color. Another version lets you pass an array of bytes to represent the characters to be drawn. 
getFont() - Gets the current font and returns an object of type Font which describes the context's current font. getFontMetrics() - Gets the font metrics of the current font. Returns an object of type FontMetrics. Methods of the FontMetrics class can be used to obtain metrics information (size, etc.) about the font to which the getFontMetrics() method is applied. getFontMetrics(Font) - Gets the font metrics for the specified font. setFont(Font) - Sets this graphics context's font to the specified font. clipRect(int, int, int, int) - Intersects the current clip with the specified rectangle. This results in a clipping area that is the intersection of the current clipping area and the specified rectangle. Future rendering operations have no effect outside of the clipping area. This method can only be used to reduce the size of the clipping area. It cannot be used to increase the size of the clipping area. getClip() - Gets the current clipping area and returns it as an object of type Shape. Note that Shape is an interface. The following information and caution regarding the Shape interface was taken from the JavaSoft documentation for JDK 1.1.3: |Shape: The interface for objects which represent some form of geometric This interface will be revised in the upcoming Java2D project. It is meant to provide a common interface for various existing geometric AWT classes and methods which operate on them. Since it may be superseded or expanded in the future, developers should avoid implementing this interface in their own classes until it is completed in a later release. setClip(int, int, int, int) - Sets the current clip to the rectangle specified by the given coordinates. drawImage(Image, int, int, int, int, int, int, int, int, Color, ImageObserver) - Draws as much of the specified area of the specified image as is currently available, scaling it on the fly to fit inside the specified area of the destination drawable surface. In RGB format, the red, green, and blue components of a color are each represented by an integer in the range 0-255. The value 0 indicates no contribution from the associated primary color. A value of 255 indicates the maximum intensity of the primary color component. There is another color model called the HSB model (hue, saturation, and brightness). The Color class provides a set of convenience methods for converting between RGB and HSB colors. All that is required to use these variables for the specification of a color is to reference the color by variable name as illustrated in the following code fragment: One of these two constructors allows you to specify the contributions of red, green, and blue with integer values ranging between 0 and 255 where 0 represents no contribution of a particular primary color and 255 represents a maximum contribution of the primary color. The description of this constructor is: |Color(int, int, int) - Creates a color with the specified RGB components.| |Color(float, float, float) - Creates a color with the specified red, green, and blue values, where each of the values is in the range 0.0-1.0.| |Color(int) - Creates a color with the specified RGB value, where the red component is in bits 16-23 of the argument, the green component is in bits 8-15 of the argument, and the blue component is in bits 0-7.| hashCode() - Computes the hash code for this color. toString() - Creates a string that represents this color and indicates the values of its RGB components. getRed() - Gets the red component of this color as an integer in the range 0 to 255. 
getGreen() - Gets the green component of this color as an integer in the range 0 to 255. getBlue() - Gets the blue component of this color as an integer in the range of 0 to 255. getRGB() - Gets the RGB value representing the color. The red, green, and blue components of the color are each scaled to be a value between 0 and 255. Bits 24-31 of the returned integer are 0xff, bits 16-23 are the red value, bit 8-15 are the green value, and bits 0-7 are the blue value. brighter() - Creates a brighter version of this color. This method was used in an earlier lesson where I created a fake button and used this method to provide highlighting on the edges. darker() - Creates a darker version of this color. This method was used in an earlier lesson where I created a fake button and used this method to provide shadows on the edges. decode(String) - Converts a string to an integer and returns the specified color. getColor(String) - Finds a color in the system properties. The String object is used as the key value in the key/value scheme used to describe properties in Java. The value is then used to return a Color object. getColor(String, Color) - Finds a color in the system properties. Same as the previous method except that the second parameter is returned if the first parameter doesn't result in a valid Color object. getColor(String, int) - Finds a color in the system properties. Similar to the previous method except that the second parameter is used to instantiate and return a Color object. getHSBColor(float, float, float) - Creates a Color object based on values supplied for the HSB color model. HSBtoRGB(float, float, float) - Converts the components of a color, as specified by the HSB model, to an equivalent set of values for the RGB model. RGBtoHSB(int, int, int, float) - Converts the components of a color, as specified by the RGB model, to an equivalent set of values for hue, saturation, and brightness, the three components of the HSB model. One lesson will be dedicated to working with some of the utility methods in the Graphics class, and specific lessons will be dedicated to working with shapes, text, clipping, and images. In addition, the utility methods of the Graphics class and the methods and variables of the Color class will be used throughout those lessons.
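To tie several of the methods summarized above together, here is a minimal sketch of a paint() method that exercises a few Graphics and Color operations. It is not code from this lesson; the class name, coordinates, and color choices are invented for illustration, and only standard JDK methods are used.

import java.awt.*;

// Minimal illustration (not lesson code): a Canvas whose paint() method
// uses a handful of the Graphics and Color methods described above.
public class GraphicsSketch extends Canvas {
    public void paint(Graphics g) {
        // Drawing occurs in the current color; start with a predefined constant.
        g.setColor(Color.red);
        g.drawRect(10, 10, 60, 40);   // outline extends one pixel farther right and down than the fill below

        g.setColor(new Color(0, 128, 255));   // int constructor, each component 0-255
        g.fillRect(10, 10, 60, 40);           // fills the interior of the same rectangle

        g.setColor(Color.blue.darker());      // derive a darker shade of a constant
        g.fillOval(100, 10, 40, 40);          // equal width and height gives a circle

        // Text rendering uses the current font; FontMetrics reports its size.
        g.setFont(new Font("Serif", Font.PLAIN, 14));
        FontMetrics fm = g.getFontMetrics();
        g.drawString("font height: " + fm.getHeight(), 10, 80);

        // A copy of the context can be clipped and disposed of independently.
        Graphics g2 = g.create();
        g2.clipRect(0, 0, 50, 50);    // reduces the clip; drawing outside it is ignored
        g2.setColor(Color.green);
        g2.fillRect(0, 0, 200, 200);  // only the clipped 50 x 50 corner is painted
        g2.dispose();                 // release the copied context when finished
    }
}

As the lesson notes, the copy returned by create() is disposed of explicitly rather than left for finalization.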
http://www.dickbaldwin.com/java/Java162.htm
13
64
How a gyroscope works Cef's Website: click here A quick explanation of how a gimbaled gyro functions Figure 4 shows a simplified gyro that is gimbaled in a plane perpendicular to the tilting force. As the rim rotates through the gimbaled plane all the energy transferred to the rim by the tilting force is mechanically stopped. The rim then rotates back into the tilting force plane where it will be accelerated once more. Each time the rim is accelerated the axis moves in an arc in the tilting force plane. There is no change in the RPM of the rim around the axis. The gyro is a device that causes a smooth transition of momentum from one plane to another plane, where the two planes intersect along the axis. A more detailed explanation of how a gimbaled gyro functions Here I attempt to show how much the axis will rotate around a gimbaled axis. That is to say, how fast it rotates in the direction of a tilting force. In figure 4, the precession plane in the gimbaled example functions differently than in the above example of figures 1-3, and I have renamed it "stop the tilting force plane". The point masses at the rim are the only mass of the gyro system that is considered. The mass and gyroscope effect of the axis is ignored. At first consider only ˝ of the rim, the left half. The point masses inside the "stop the tilting force plane" share half their mass on either side of the plane, and add their combined, 1/4kg, mass to point mass A of 1/2kg. So then the total mass on the left side is ˝ the total mass of all 4 point masses, or 1kg. The tilting force will change the position of point mass B and D very little and change the position of point mass A the most. So we must use the average distance from the axis of all the mass on the left-hand side. Point mass A is rotating at 5 revolutions per second. This means that it is exposed to the tilting force for only .1 seconds. The tilting force of 1 Newton, if applied for .1 second, will cause the mass at the average distance to move .005 meter in an arc, in the tilting force plane. Since the length of the axis is twice as long as the average distance of the rim’s mass, the axis will move .01 meter in an arc. At the end of .1 second the point mass will be in the "stop the tilting force plane" and all the energy transferred to point mass A is lost in the physical restraint of the gimbal bearings. The same thing happens when point mass A is on the right side of figure 4. Only now, the tilting force will move point mass A down, and the axis will advance another .01meter. .01 meter every .1 second is not the whole story because the mass on the right side of the gyro hasn’t been considered. The right side has the same mass as the left and has the same effect on the axis as the left side does. So the axis will advance half as much, half of .01 meter, or .005meters. Both halves of the rim mass will pass through the stop the tilting force plane 10 times in one second. Each time a half of the rim passes though the "stop the tilting force plane", it losses all its momentum that was added by the tilting force. The mass has to undergo acceleration again so we continually calculate the effect that 1 Newton has for .1 second on the rim mass at the average distance, 10 times a second. So then; at the point that the 1 Newton force is applied, the axis will move 5cm per second along an arc. The gyro will rotate at .48 RPM within the tilting force plane. 
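The arithmetic behind the figures quoted above can be laid out step by step. This is only a restatement of the author's own numbers (a 1-Newton tilting force, 1 kilogram of effective mass per half of the rim, 5 revolutions per second), not an independent derivation:

\[ a = \frac{F}{m} = \frac{1\ \text{N}}{1\ \text{kg}} = 1\ \text{m/s}^2, \qquad t = \frac{1}{2 \times 5\ \text{rev/s}} = 0.1\ \text{s}, \qquad d = \tfrac{1}{2}at^2 = 0.005\ \text{m} \]

Because the axis is taken to be twice the average radius of the rim mass, the axis end moves 2d = 0.01 m in each 0.1-second interval; averaging the contributions of the two halves of the rim halves this to 0.005 m per 0.1 s, that is, 5 cm per second along the arc. The quoted 0.48 RPM then corresponds to treating that 5 cm/s as motion along an arc of roughly 1 m radius (0.05 rad/s is about 0.48 RPM), which appears to be the author's implied axis length.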
What effect does the rim speed have on the distance that the axis will rotate along an arc in the tilting force plane? The gyro will rotate in the tilting force plane half as fast if the rim speed is doubled. What happens when the mass of the rim is doubled? The gyro will rotate in the tilting force plane half as fast if the rim mass is doubled. How does the rim diameter affect rotation in the tilting force plane? The gyro will rotate in the tilting force plane half as fast if the rim diameter is doubled.
The Math of a gimbaled gyro
1 Newton = 1 kilogram × 1 meter/sec², and d = 1/2 × a × t². 1 Newton acting on 1 kg will accelerate the mass at a rate of 1 meter/sec². The time that half the mass of the rim is exposed to the tilting force, at 5 revolutions per second, is 1/10 of a second, ten times each second. The distance, d, that the mass will move in 0.1 sec is d = 1/2 × 1 × (0.1)² = 0.005 meter. The axis is twice as long as the average distance at which the rim mass is calculated, so the axis moves 0.005 × 2 = 0.01 meter. Now consider the other side of the gyro as acted on by the same 1 Newton. The force will have, ten times a second, the chance to accelerate the rim mass from a relative velocity of 0 m/sec.
Other experiments with a gyro
http://www.gyroscopes.org/how.asp
13
108
Special Sub-Topic: Circle Theorems |One of the easiest circle theorems to remember is to do with angles in a semi-circle. If two straight lines are drawn from either end of the diameter of a circle and meet at a point on the circumference, what will the angle always be?| 90 degrees. Two straight lines from either end of the diameter will always meet at a right angle to each other on the circumference. This applies regardless of the length of the diameter and circumference of the circle. |When an angle is formed from two points on the circumference and two straight lines drawn from these two points meet at another point on the same circumference, it is said that the angle is ________ by an arc.| Subtended. If an angle is subtended by an arc then the point on the circumference where the straight lines from the two points meet will have a certain angle. This can be measured with a protractor. If a second example of this happens whereby from the same two points on the circumference another meeting point is formed, then that second angle will be equal to the first angle. Angles are equal when subtended by the same arc (a=b etc). It is also possible to say that angles are equal when subtended by the same chord. The difference between a chord and an arc is, whether a straight line is drawn between the two points marked on the circumference. |What is the geometric name given to a straight line which touches the circumference of a circle at one point only?| Tangent. A tangent is the name of a straight line which touches the circumference of a circle at only one point. A tangent can meet the circumference at any point around a circle. |When the straight line described above meets the radius or diameter of a circle, what is the size of the angle always formed?| 90 degrees. This situation also leads to the two lines, the tangent and the radius, meeting at right angles to one another. Another way to describe this would be that the tangent is perpendicular to the radius of the circle. |Two points are marked on the circumference of a circle. A straight line is drawn from each of these points which both meet in the exact centre of the circle. Another straight line is drawn from each point and meet at a certain point on the circumference. How much bigger is the angle formed at the centre compared to the angle formed at the circumference?| Double. Angles at the circumference are half the size of the angle formed at the circle's centre when the angles are created from the same two points on the circumference. If the angle at the centre is (x) and the angle at the circumference is (y) then the equation for this rule would be - x=2y. |When a four-sided shape, where each corner touches the circumference is found inside a circle, it is called a cyclic quadrilateral. What is the rule about the size of the angles that are opposite to each other within this cyclic quadrilateral?| They total 180 degrees. Opposite angles within a cyclic quadrilateral will always total 180 degrees. In total, angles in a quadrilateral tally to make 360 degrees. That is the proof of the rule. As there are always two sets of opposing angles in a quadrilateral, each must total 180 degrees and subsequently reach the full 360 degrees. |A point is marked outside the confines of the circle and from that two straight lines are drawn so that they touch the opposite sides of the circles. What does this tell us about the straight lines from the point of origin to the point of meeting the circumference?| They are equal in length. 
When this phenomenon occurs, the length of the two straight lines from the point of origin to the points on the circumference of the circle that they touch will be equal. |An important factor to take into consideration when attempting to find angles using circle theorems is the use of isosceles triangles. Which of the following statements is true of isosceles triangles?| Both of these. The use of isosceles triangles when attempting to ascertain angles within a circle makes the process a whole lot easier. When one of the triangle's corners is situated directly on the circle's geometric centre, it is possible to know that two of its sides (the radii of the circle) will be equal in length. An isosceles triangle is indicated by a dash drawn over each of the two equal sides. Once the isosceles triangle is known, the missing angle can be established. As the angle at the point where the two equal lines meet is always the odd one out, you know that the other two angles will be equal. If angle x = 50 degrees, angle y = 50 degrees. That will leave a final angle of 80 degrees, as angles in a triangle always amount to 180 degrees. |One significant rule regarding the attainment of angle sizes within a circle is the 'alternate _______ theorem'.| Segment. The alternate segment theorem states that two angles in a certain configuration will be equal to each other. Take angles (x) and (y). Angle (x) is situated between a chord (a line connecting two points of an arc) and a tangent. Angle (y) is situated at the point where two straight lines drawn from either end of the chord meet; this meeting point is on the circumference, in the alternate segment. When angles (x) and (y) are found in this setup they are equal in size. The equation would be x = y. |We can't possibly do a math topic and not put in an equation, so here we have it! What is the equation for finding the area of a sector within a circle? (r = radius, A = angle, x = multiply, / = divide, pi = 3.14)| pi x r x r x (A/360). pi x r x r x (A/360) is much easier to say than it is to write down (pi times r squared times A divided by 360). Pi is the number which indicates the ratio between the length of the circumference and the length of the diameter. The reason that the angle (A) is divided by the number 360 is because there are 360 degrees in a full circle.
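As a quick worked example of the sector formula just quoted (this example is an addition, not part of the original quiz): for a circle of radius r = 3 and a sector angle A = 60 degrees,

\[ \text{area} = \pi r^2 \times \frac{A}{360} = \pi \times 9 \times \frac{60}{360} = \frac{3\pi}{2} \approx 4.71 \]

In the same compact notation, the earlier rules read x = 2y for an angle at the centre versus one at the circumference subtended by the same arc, and opposite angles of a cyclic quadrilateral satisfy a + c = b + d = 180 degrees.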
http://www.funtrivia.com/en/subtopics/Circle-Theorems-269014.html
13
68
The sun is one of over 100 billion stars in the Milky Way Galaxy. It is about 25,000 light-years from the center of the galaxy, and it revolves around the galactic center once about every 250 million years. One light-year, the distance that light travels in a vacuum in a year, equals about 5.88 trillion miles (9.46 trillion kilometers). Image credit: NASA/Transition Region & Coronal Explorer The sun's radius (distance from its center to its surface) is about 432,000 miles (695,500 kilometers), approximately 109 times Earth's radius. The following example may help you picture the relative sizes of the sun and Earth and the distance between them: Suppose the radius of Earth were the width of an ordinary paper clip. The radius of the sun would be roughly the height of a desk, and the sun would be about 100 paces from Earth. The part of the sun that we see has a temperature of about 5500 degrees C (10,000 degrees F). Astronomers measure star temperatures in a metric unit called the Kelvin (abbreviated K). One Kelvin equals exactly 1 Celsius degree (1.8 Fahrenheit degree), but the Kelvin and Celsius scales begin at different points. The Kelvin scale starts at absolute zero, which is -273.15 degrees C (- 459.67 degrees F). Thus, the temperature of the solar surface is about 5800 K. Temperatures in the sun's core reach over 15 million K. The sun is a star with a diameter of approximately 864,000 miles (1,390,000 kilometers), about 109 times the diameter of Earth. The largest stars have a diameter about 1,000 times that of the sun. Image credit: NASA/NSSDC The sun, like Earth, is magnetic. Scientists describe the magnetism of an object in terms of a magnetic field. This is a region that includes all the space occupied by the object and much of the surrounding space. Physicists define a magnetic field as the region in which a magnetic force could be detected -- as with a compass. Physicists describe how magnetic an object is in terms of field strength. This is a measure of the force that the field would exert on a magnetic object, such as a compass needle. The typical strength of the sun's field is only about twice that of Earth's field. But the sun's magnetic field becomes highly concentrated in small regions, with strengths up to 3,000 times as great as the typical strength. These regions shape solar matter to create a variety of features on the sun's surface and in its atmosphere, the part that we can see. These features range from relatively cool, dark structures known as sunspots to spectacular eruptions called flares and coronal mass ejections. Flares are the most violent eruptions in the solar system. Coronal mass ejections, though less violent than flares, involve a tremendous mass (amount of matter). A single ejection can spew approximately 20 billion tons (18 billion metric tons) of matter into space. A cube of lead 3/4 mile (1.2 kilometers) on a side would have about the same mass. The sun was born about 4.6 billion years ago. It has enough nuclear fuel to remain much as it is for another 5 billion years. Then it will grow to become a type of star called a red giant. Later in the sun's life, it will cast off its outer layers. The remaining core will collapse to become an object called a white dwarf, and will slowly fade. The sun will enter its final phase as a faint, cool object sometimes called a black dwarf. This article discusses Sun (Characteristics of the sun) (Zones of the sun) (Solar activity) (Evolution of the sun) (Studying the sun) (History of modern solar study). 
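As a quick check of the figures in the introduction (the Earth radius of about 3,959 miles is supplied here, not stated above):

\[ 5500\,^\circ\text{C} + 273 \approx 5800\ \text{K}, \qquad \frac{432{,}000\ \text{miles}}{3{,}959\ \text{miles}} \approx 109 \]

consistent with the quoted surface temperature of about 5800 K and with a solar radius of about 109 Earth radii.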
Characteristics of the sun Mass and density The sun has 99.8 percent of the mass in the solar system. The sun's mass is roughly 2 X 1027 tons. This number would be written out as a 2 followed by 27 zeros. The sun is 333,000 times as massive as Earth. The sun's average density is about 90 pounds per cubic foot (1.4 grams per cubic centimeter). This is about 1.4 times the density of water and less than one-third of Earth's average density. The sun, like most other stars, is made up mostly of atoms of the chemical element hydrogen. The second most plentiful element in the sun is helium, and almost all the remaining matter consists of atoms of seven other elements. For every 1 million atoms of hydrogen in the entire sun, there are 98,000 atoms of helium, 850 of oxygen, 360 of carbon, 120 of neon, 110 of nitrogen, 40 of magnesium, 35 of iron, and 35 of silicon. So about 94 percent of the atoms are hydrogen, and 0.1 percent are elements other than hydrogen and helium. But hydrogen is the lightest of all elements, and so it accounts for only about 72 percent of the mass. Helium makes up around 26 percent. The inside of the sun and most of its atmosphere consist of plasma. Plasma is basically a gas whose temperature has been raised to such a high level that it becomes sensitive to magnetism. Scientists sometimes emphasize the difference in behavior between plasma and other gas. They say that plasma is a fourth state of matter, alongside solid, liquid, and gas. But in general, scientists make the distinction between plasma and gas only when technically necessary. The essential difference between plasma and other gas is an effect of the temperature increase: This increase has made the gas atoms come apart. What is left -- the plasma -- consists of electrically charged atoms called ions and electrically charged particles called electrons that move about independently. An electrically neutral atom contains one or more electrons that act as though they form a shell or shells around its central region, its nucleus. Each electron carries a single unit of negative electric charge. Deep inside the atom is the nucleus, which has almost all the atom's mass. The simplest nucleus, that of the most common form of hydrogen, consists of a single particle known as a proton. A proton carries a single unit of positive electric charge. All other nuclei have one or more protons and one or more neutrons. A neutron carries no net charge, and so every nucleus is electrically positive. But a neutral atom has as many electrons as protons. The net electric charge of a neutral atom is therefore zero. An atom or molecule that comes apart by losing one or more electrons has a positive charge and is called an ion or, sometimes, a positive ion. Most of the atoms inside the sun are positive ions of the most common form of hydrogen. Thus, most of the sun consists of single protons and independent electrons. The sun is much larger than Earth. From the sun's center to its surface, it is about 109 times the radius of Earth. Some of the streams of gas rising from the solar surface are larger than Earth. Image credit: World Book illustration by Roberta Polfus How much of a gas is made up of single atoms and how much of molecules also depends upon its temperature. If the gas is relatively hot, the atoms will move about independently. But if the gas is relatively cool, its atoms may bond (combine chemically), creating molecules. Much of the sun's surface consists of a gas of single atoms. 
But sunspots are so cool that some of their atoms can bond to form molecules. The remainder of this article follows the general practice of scientists by referring to both plasma and other gas simply as gas. Most of the energy emitted (sent out) by the sun is visible light and a related form of radiation known as infrared rays, which we feel as heat. Visible light and infrared rays are two forms of electromagnetic radiation. The sun also emits particle radiation, made up mostly of protons and electrons. Electromagnetic radiation consists of electrical and magnetic energy. The radiation can be thought of as waves of energy or as particle-like "packets" of energy called photons. Visible light, infrared rays, and other forms of electromagnetic radiation differ in their energy. Six bands of energy span the entire spectrum (range) of electromagnetic energy. From the least energetic to the most energetic, they are: radio waves, infrared rays, visible light, ultraviolet rays, X rays, and gamma rays. Microwaves, which are high-energy radio waves, are sometimes considered to be a separate band. The sun emits radiation of each type in the spectrum. The amount of energy in electromagnetic waves is directly related to their wavelength, the distance between successive wave crests. The more energetic the radiation, the shorter the wavelength. For example, gamma rays have shorter wavelengths than radio waves. The energy in an individual photon is related to the position of the photon in the spectrum. For instance, a gamma ray photon has more energy than a photon of radio energy. All forms of electromagnetic radiation travel through space at the same speed, commonly known as the speed of light: 186,282 miles (299,792 kilometers) per second. At this rate, a photon emitted by the sun takes only about 8 minutes to reach Earth. The amount of electromagnetic radiation from the sun that reaches the top of Earth's atmosphere is known as the solar constant. This amount is about 1,370 watts per square meter. But only about 40 percent of the energy in this radiation reaches Earth's surface. The atmosphere blocks some of the visible and infrared radiation, almost all the ultraviolet rays, and all the X rays and gamma rays. But nearly all the radio energy reaches Earth's surface. Protons and electrons flow continually outward from the sun in all directions as the solar wind. These particles come close to Earth, but Earth's magnetic field prevents them from reaching the surface. However, more intense concentrations of particles from flares and coronal mass ejections on the sun reach Earth's atmosphere. These particles are known as solar cosmic rays. Most of them are protons, but they also include heavier nuclei as well as electrons. They are extremely energetic. As a result, they can be hazardous to astronauts in orbit or to orbiting satellites. The cosmic rays cannot reach Earth's surface. When they collide with atoms at the top of the atmosphere, they change into a shower of less energetic particles. But, because the solar events are so energetic, they can create geomagnetic storms, major disturbances in Earth's magnetic field. The storms, in turn, can disrupt electrical equipment on Earth's surface. For example, they can overload power lines, leading to blackouts. In the visible-light band of the electromagnetic spectrum are all the colors of the rainbow. Sunlight consists of all these colors. Most of the sun's radiation comes to us in the yellow-green part of the visible spectrum. However, sunlight is white. 
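The 8-minute travel time quoted above can be verified from the speed of light, taking the average Earth-sun distance to be about 93 million miles (roughly 150 million kilometers), a value assumed here rather than stated in the excerpt:

\[ t = \frac{93{,}000{,}000\ \text{mi}}{186{,}282\ \text{mi/s}} \approx 499\ \text{s} \approx 8.3\ \text{minutes} \]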
When the atmosphere acts as a filter for the setting sun, the sun may look yellow or orange. You can view the colors in sunlight by using a prism to separate and spread them out. Red light, which is produced by the radiation with the least energy per photon -- and the longest waves -- will be at one end of the spectrum. The red light will gradually shade into orange light, which, in turn, will shade into yellow light. Next to yellow will be green, and then will come blue. In some lists of the colors of the rainbow, indigo comes after blue. The last color will be violet, produced by the radiation with the most energy per photon -- and the shortest waves. Such color listings are not meant to indicate that sunlight has only six or seven colors. Each shading is itself a color. Nature produces many more colors than people have ever named. The sun makes a complete rotation in about a month. But because the sun is a gaseous body rather than a solid one, different parts of the sun rotate at different rates. Gas near the sun's equator takes about 25 days to rotate once, while gas at higher latitudes may take slightly more than 28 days. The sun's axis of rotation is tilted by a few degrees from the axis of Earth's orbit. Thus, either the sun's north geographic pole or its south geographic pole is usually visible from Earth. The sun vibrates like a bell that is continually struck. But the sun produces more than 10 million individual "tones" at the same time. The vibrations of the solar gas are mechanically similar to the vibrations of air -- also a gas -- that we know as sound waves. Astronomers therefore refer to the solar waves as sound waves, though the vibrations are much too slow for us to hear. The fastest solar vibrations have a period of about 2 minutes. A vibration's period is the amount of time taken for a complete cycle of vibration -- one back-and-forth movement of the vibrating object. The slowest vibration that a human being can hear has a period of about 1/20 of a second. Most of the sun's sound waves originate in convection cells -- large concentrations, or clumps, of gas beneath the surface. These cells carry energy to the surface by rising, just as water boiling in a pan rises to the surface. The word convection refers to the boiling motions of the cells. As the cells rise, they cool. They then fall back down to the level at which the upward motion started. As the cells fall, they vibrate violently. The vibrations cause sound waves to move out from the cells. Because the sun's atmosphere has so little mass, sound waves cannot travel through it. Therefore, when a wave reaches the surface, it turns back inward. As a result, a bit of the surface bobs up and down. As the wave travels inward, it begins to curve back toward the surface. The amount by which it curves depends on the density of the gas through which it travels and other factors. Eventually, the wave reaches the surface and turns inward again. It continues to travel until it loses all its energy to the surrounding gas. The waves that travel downward the greatest distance have the longest periods. Some of these waves approach the sun's core and have periods of several hours. Some of the time, the sun's magnetic field has a simple overall shape. At other times, the field is extremely complex. The simple field resembles the field that would be present if the sun's axis of rotation were a huge bar magnet. You can see the shape of a bar magnet's field by conducting an experiment with iron filings. 
Place a sheet of paper on a bar magnet and then sprinkle iron filings on the paper. The filings will form a pattern that reveals the shape of the magnetic field. Many of the filings will gather in D-shaped loops that connect the ends of the magnet. Physicists define the field in terms of imaginary lines that give rise to the loops of filings. These lines are called field lines, flux lines, or lines of force. Scientists assign these lines a direction, and the bar magnet is said to have a magnetic north pole at one end and a magnetic south pole at the other end. The field lines go out of the magnet from the north pole, loop around, and return to the magnet at the south pole. The cause of the sun's magnetic field is, in part, the movement of the convection cells. Any electrically charged object can create a magnetic field simply by moving. The convection cells, which are composed of positive ions and electrons, circulate in a way that helps create the solar field. When the sun's magnetic field becomes complex, field lines resemble a kinked, twisted garden hose. The field develops kinks and twists for two reasons: (1) The sun rotates more rapidly at the equator than at higher latitudes, and (2) the inner parts of the sun rotate more rapidly than the surface. The differences in rotational speed stretch field lines in an easterly direction. Eventually, the lines become so distorted that the kinks and twists develop. In some areas, the field is thousands of times stronger than the overall magnetic field. In these places, clusters of field lines break through the surface, creating loops in the solar atmosphere. At one end of the loop, the breakthrough point is a magnetic north pole. At this point, the direction of the field lines is upward -- that is, away from the interior. At the other end of the loop, the breakthrough point is a magnetic south pole, and the lines point downward. A sunspot forms at each point. The field lines guide ions and electrons into the space above the sunspots, producing gigantic loops of gas. The number of sunspots on the sun depends on the amount of distortion in the field. The change in this number, from a minimum to a maximum and back to a minimum, is known as the sunspot cycle. The average period of the sunspot cycle is about 11 years. At the end of a sunspot cycle, the magnetic field quickly reverses its polarity and loses most of its distortion. Suppose the sun's magnetic north pole and its geographic north pole were at the same place at the start of a given cycle. At the beginning of the next cycle, the magnetic north pole would be at the same place as the geographic south pole. A change of polarity from one orientation to the other and back again equals the periods of two successive sunspot cycles and is therefore about 22 years. Nuclear fusion can occur in the core of the sun because the core is tremendously hot and dense. Because nuclei have a positive charge, they tend to repel one another. But the core's temperature and density are high enough to force nuclei together. The most common fusion process in the sun is called the proton-proton chain. This process begins when nuclei of the simplest form of hydrogen -- single protons -- are forced together one at a time. First, a nucleus with two particles forms, then a nucleus with three particles, and finally a nucleus with four particles. The process also produces an electrically neutral particle called a neutrino. 
The final nucleus consists of two protons and two neutrons, a nucleus of the most common form of helium. The mass of this nucleus is slightly less than the mass of the four protons from which it forms. The lost mass is converted into energy. The amount of energy can be calculated from the German-born physicist Albert Einstein's famous equation E = mc-squared (E=mc2). In this equation, the symbol E represents the energy, m the mass that is covered, and c-squared (c2) the speed of light multiplied by itself. Comparison with other stars Fewer than 5 percent of the stars in the Milky Way are brighter or more massive than the sun. But some stars are more than 100,000 times as bright as the sun, and some have as much as 100 times the sun's mass. At the other extreme, some stars are less than 1/10,000 as bright as the sun, and a star can have as little as 7/100 of the sun's mass. There are hotter stars, which are much bluer than the sun; and cooler stars, which are much redder. The sun is a relatively young star, a member of a generation of stars known as Population I stars. An older generation of stars is called Population II. There may have existed an earlier generation, called Population III. However, no members of this generation are known. The remainder of this section refers to three generations of stars. The three generations differ in their content of chemical elements heavier than helium. First-generation stars have the lowest percentage of these elements, and second-generation stars have a higher percentage. The sun and other third-generation stars have the highest percentage of elements heavier than helium. The percentages differ in this way because first- and second-generation stars that "died" passed along their heavier elements. Many of these stars produced successively heavier elements by means of fusion in and near their cores. The heaviest elements were created when the most massive stars exploded as supernovae. Supernovae enrich the clouds of gas and dust from which other stars form. Other sources of enrichment are planetary nebulae, the cast-off outer layers of less massive stars. Zones of the sun The sun and its atmosphere consist of several zones or layers. From the inside out, the solar interior consists of the core, the radiative zone, and the convection zone. The solar atmosphere is made up of the photosphere, the chromosphere, a transition region, and the corona. Beyond the corona is the solar wind, which is actually an outward flow of coronal gas. Because astronomers cannot see inside the sun, they have learned about the solar interior indirectly. Part of their knowledge is based on the observed properties of the sun as a whole. Some of it is based on calculations that produce phenomena in the observable zones. The core extends from the center of the sun about one-fourth of the way to the surface. The core has about 2 percent of the sun's volume, but it contains almost half the sun's mass. Its maximum temperature is over 15 million Kelvins. Its density reaches 150 grams per cubic centimeter, nearly 15 times the density of lead. The high temperature and density of the core result in immense pressure, about 200 billion times Earth's atmospheric pressure at sea level. The core's pressure supports all the overlying gas, preventing the sun from collapsing. Almost all the fusion in the sun takes place in the core. Like the rest of the sun, the core's initial composition, by mass, was 72 percent hydrogen, 26 percent helium, and 2 percent heavier elements. 
Nuclear fusion has gradually changed the core's contents. Hydrogen now makes up about 35 percent of the mass in the center of the core and 65 percent at its outer boundary. Surrounding the core is a huge spherical shell known as the radiative zone. The outer boundary of this zone is 70 percent of the way to the solar surface. The radiative zone makes up 32 percent of the sun's volume and 48 percent of its mass. The radiative zone gets its name from the fact that energy travels through it mainly by radiation. Photons emerging from the core pass through stable layers of gas. But they scatter from the dense particles of gas so often that an individual photon may take 1,000,000 years to pass through the zone. At the bottom of the radiative zone, the density is 22 grams per cubic centimeter -- about twice that of lead -- and the temperature is 8 million K. At the top of the zone, the density is 0.2 gram per cubic centimeter, and the temperature is 2 million K. The composition of the radiative zone has remained much the same since the sun's birth. The percentages of the elements are nearly the same from the top of the radiative zone to the solar surface. The highest level of the solar interior, the convection zone, extends from the radiative zone to the sun's surface. This zone consists of the "boiling" convection cells. It makes up about 66 percent of the sun's volume but only slightly more than 2 percent of its mass. At the top of the zone, the density is near zero, and the temperature is about 5800 K. The convection cells "boil" to the surface because photons that spread outward from the radiative zone heat them. Astronomers have observed two main kinds of convection cells -- (1) granulation and (2) supergranulation. Granulation cells are about 600 miles (1,000 kilometers) across. Supergranulation cells reach a diameter of about 20,000 miles (30,000 kilometers). The lowest layer of the atmosphere is called the photosphere. This zone emits the light that we see. The photosphere is about 300 miles (500 kilometers) thick. But most of the light that we see comes from its lowest part, which is only about 100 miles (150 kilometers) thick. Astronomers often refer to this part as the sun's surface. At the bottom of the photosphere, the temperature is 6400 K, while it is 4400 K at the top. The photosphere consists of numerous granules, which are the tops of granulation cells. A typical granule exists for 15 to 20 minutes. The average density of the photosphere is less than one-millionth of a gram per cubic centimeter. This may seem to be an extremely low density, but there are tens of trillions to hundreds of trillions of individual particles in each cubic centimeter. The next zone up is the chromosphere. The main characteristic of this zone is a rise in temperature, which reaches about 10,000 K in some places and 20,000 K in others. Astronomers first detected the chromosphere's spectrum during total eclipses of the sun. The spectrum is visible after the moon covers the photosphere, but before it covers the chromosphere. This period lasts only a few seconds. The emission lines in the spectrum seem to flash suddenly into visibility, so the spectrum is known as the flash spectrum. The chromosphere is apparently made up entirely of spike-shaped structures called spicules (SPIHK yoolz). A typical spicule is about 600 miles (1,000 kilometers) across and up to 6,000 miles (10,000 kilometers) high. The density of the chromosphere is about 10 billion to 100 billion particles per cubic centimeter. 
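The density comparisons quoted for the interior can be checked against the density of lead, about 11.3 grams per cubic centimeter (a value supplied here, not given in the article):

\[ \frac{150\ \text{g/cm}^3}{11.3\ \text{g/cm}^3} \approx 13, \qquad \frac{22\ \text{g/cm}^3}{11.3\ \text{g/cm}^3} \approx 1.9 \]

roughly in line with the article's "nearly 15 times" the density of lead for the core and "about twice that of lead" for the bottom of the radiative zone.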
The temperature of the chromosphere ranges to about 20,000 K, and the corona is hotter than 500,000 K. Between the two zones is a region of intermediate temperatures known as the chromosphere-corona transition region, or simply the transition region. The transition region receives much of its energy from the overlying corona. The region emits most of its light in the ultraviolet spectrum. The thickness of the transition region is a few hundred to a few thousand miles or kilometers. In some places, relatively cool spicules extend from the chromosphere high into the solar atmosphere. Nearby may be areas where thin, hot coronal structures reach down close to the photosphere. Corona is the part of the sun's atmosphere whose temperature is greater than 500,000 K. The corona consists of such structures as loops and streams of ionized gas. The structures connect vertically to the solar surface, and magnetic fields that emerge from inside the sun shape them. The temperature of a given structure varies along each field line. Near the surface, the temperature is typical of the photosphere. At higher levels, the temperature has chromospheric values, then values of the transition region, then coronal values. In the part of the corona nearest the solar surface, the temperature is about 1 million to 6 million K, and the density is about 100 million to 1 billion particles per cubic centimeter. The temperature reaches tens of millions of Kelvins when a flare occurs. The corona is so hot that it extends far into space and continually expands. The flow of coronal gas into space is known as the solar wind. At the distance of Earth from the sun, the density of the solar wind is about 10 to 100 particles per cubic centimeter. The solar wind extends far into interplanetary space as a large, teardrop-shaped cavity called the heliosphere. The sun and all the planets are inside the heliosphere. Far beyond the orbit of Pluto, the farthest planet, the heliosphere joins the interstellar medium, the dust and gas that occupy the space between the stars. The sun's magnetic fields rise through the convection zone and erupt through the photosphere into the chromosphere and corona. The eruptions lead to solar activity, which includes such phenomena as sunspots, flares, and coronal mass ejections. Areas where sunspots or eruptions occur are known as active regions. The amount of activity varies from a solar minimum at the beginning of a sunspot cycle to a solar maximum about 5 years later. The number of sunspots that exist at a given time varies. On the side of the solar disk that we see, this number ranges from none to approximately 250 individual sunspots and clusters of sunspots. Sunspots are dark, often roughly circular features on the solar surface. They form where denser bundles of magnetic field lines from the solar interior break through the surface. |› Return to Topics||› Back to Top|
http://mynasa.nasa.gov/worldbook/sun_worldbook.html
13
132
A vector field can be completely represented by means of three sets of charts, one of which shows the scalar field of the magnitude of the vector and two of which show the direction of the vector in horizontal and vertical planes. It can also be fully described by means of three sets of scalar fields representing the components of the vector along the principal coordinate axes (V. Bjerknes and different collaborators, 1911). In oceanography, one is concerned mainly with vectors that are horizontal, such as velocity of ocean currents—that is, two-dimensional vectors. Representation of a two-dimensional vector field by vectors of indicated direction and magnitude and by vector lines and equiscalar curves. Vector lines cannot intersect except at singular points or lines, where the magnitude of the vector is zero. Vector lines cannot begin or end within the vector field except at singular points, and vector lines are continuous. The simplest and most important singularities in a two-dimensional vector field are shown in fig. 96: These are (1) points of divergence (fig. 96A and C) or convergence (fig. 96B and D), at which an infinite number of vector lines meet; (2) neutral points, at which two or more vector lines intersect (the example in fig. 96E shows a neutral point of the first order in which two vector lines intersect—that is, a hyperbolic point); and (3) lines of divergence (fig. 96F) or convergence (fig. 96G), from which an infinite number of vector lines diverge asymptotically or to which an infinite number of vector lines converge asymptotically. It is not necessary to enter upon all the characteristics of vector fields or upon all the vector operations that can be performed, but two important vector operations must be mentioned. Singularities in a two-dimensional vector field. A and C, points of divergence; B and D, points of convergence; E, neutral point of first order (hyperbolic point); F, line of convergence; and G, line of divergence. Assume that a vector A has the components Ax, Ay, and Az. The scalar quantity The vector which has the components Two representations of a vector that varies in space and time will also be mentioned. A vector that has been observed at a given locality during a certain time interval can be represented by means of a central vector diagram (fig. 97). In this diagram, all vectors are plotted from the same point, and the time of observation is indicated at each vector. Occasionally the end points of the vector are joined by a curve on which the time of observation is indicated and the vectors themselves are omitted. This form of representation is commonly used when dealing with periodic currents such as tidal currents. A central vector diagram is also used extensively in pilot charts to indicate the frequency of winds from given directions. In this case the direction of the wind is shown by an arrow, and the frequency of wind from that direction is shown by the length of the arrow. Time variation of a vector represented by a central vector diagram (left) and a progressive vector diagram (right). If it can be assumed that the observations were made in a uniform vector field, a progressive vector diagram is useful. This diagram is constructed by plotting the second vector from the end point of the first, and so on (fig. 97). When dealing with velocity, one can compute the displacement due to the average velocity over a short interval of time. 
When these displacements are plotted in a progressive vector diagram, the resulting curve will show the trajectory of a particle if the velocity field is of such uniformity that the observed velocity can be considered representative of the velocities in the neighborhood of the place of observation. The vector that can be drawn from the beginning of the first vector to the end of the last shows the total displacement in the entire time interval, and this displacement, divided by the time interval, is the average velocity for the period. The Field of Motion and the Equation of Continuity The Field of Motion. Among vector fields the field of motion is of special importance. Several of the characteristics of the field of motion can be dealt with without considering the forces which have brought about or which maintain the motion, and these characteristics form the subject of kinematics. The velocity of a particle relative to a given coordinate system is defined as ν = dr/dt, where dr is an element of length in the direction in which the particle moves. In a rectangular coordinate system the velocity has the components The velocity field can be completely described by the Lagrange or by the Euler method. In the Lagrange method the coordinates of all moving particles are represented as functions of time and of a threefold multitude of parameters that together characterize all the moving particles. From this representation the velocity of each particle, and, thus, the velocity field, can be derived at any time. The more convenient method by Euler will be employed in the following. This method assumes that the velocity of all particles of the fluid has been defined. On this assumption the velocity field is completely described if the components of the velocity can be represented as functions of the coordinates and of time: The characteristic difference between the two methods is that Lagrange's method focuses attention on the paths taken by all individual particles, whereas Euler's method focuses attention on the velocity at each point in the coordinate space. In Euler's method it is necessary, however, to consider the motion of the individual particles in order to find the acceleration. After a time dt, a particle that, at the time t, was at the point (x,y,z) and had the velocity components fx(x,y,z,t), and so on, will be at the point (x + dx, y + dy, z + dz), and will have the velocity components fx(x + dx, y+ dy, z + dz, t + dt), and so on. Expanding in Taylor's series, one obtains Thus, one has to deal with two time derivatives: the individual time derivative, which represents the acceleration of the individual particles, and the local time derivative, which represents the time change of the velocity at a point in space and is called the local acceleration. The last terms in equation (XII, 17) are often combined and called the field acceleration. The above development is applicable not only when considering the velocity field, but also when considering any field of a property that varies in space and time (p. 157). The velocity field is stationary when the local time changes are zero: The Equation of Continuity. Consider a cube of volume dxdydz. The mass of water that in unit time flows in parallel to the x axis is equal to pvxdydz, and the mass that flows out is equal to Now:equation (XII, 17), therefore, The equation of continuity is not valid in the above form at a boundary surface because no out- or inflow can take place there. 
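To make the distinction between the two time derivatives concrete, the following is a minimal numerical sketch in Python. The velocity field used here is hypothetical, chosen only so that the local term, the field (advective) term, and their sum, the individual acceleration, can be evaluated and compared at a single point.

```python
import numpy as np

# A minimal 1-D illustration of the two time derivatives discussed above:
# the local derivative at a fixed point, and the individual (material)
# derivative following a particle, which adds the field (advective)
# term v * dv/dx.  The velocity field below is hypothetical.

def velocity(x, t):
    """Hypothetical velocity field v(x, t) in m/s."""
    return 0.5 * np.sin(x - 0.2 * t)

x0, t0 = 1.0, 10.0          # point and instant of interest
dx, dt = 1e-5, 1e-5         # small steps for finite differences

v = velocity(x0, t0)
local = (velocity(x0, t0 + dt) - velocity(x0, t0 - dt)) / (2 * dt)           # dv/dt at fixed x
advective = v * (velocity(x0 + dx, t0) - velocity(x0 - dx, t0)) / (2 * dx)   # v * dv/dx
individual = local + advective   # acceleration of the particle itself

print(f"local acceleration      : {local: .6f} m/s^2")
print(f"field (advective) term  : {advective: .6f} m/s^2")
print(f"individual acceleration : {individual: .6f} m/s^2")
```

In a stationary field the local term vanishes and the particle's acceleration comes entirely from the field term, as noted above.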
In a direction normal to a boundary a particle in that surface must move at the same velocity as the surface itself. If the surface is rigid, no component normal to the surface exists and the velocity must be directed parallel to the surface. The condition Application of the Equation of Continuity. At the sea surface the kinematic boundary condition must be fulfilled. Designating the vertical displacement of the sea surface relative to a certain level of equilibrium by η, and taking this distance positive downward, because the positive z axis is directed downward, one obtains With stationary distribution of mass (∂ρ/∂t = 0) the equation of continuity is reduced to The total transport of mass through a vertical surface of unit width reaching from the surface to the bottom has the componentsequation (XII, 23) by dz and integrating from the surface to the bottom, one obtains When dealing with conditions near the surface, one can consider the density as constant and can introduce average values of the velocity components [Equation] and [Equation] within a top layer of thickness H. With these simplifications, one obtains, putting νz,0 = 0,equation (XII, 25) states that at a small distance below the surface ascending motion is encountered if the surface currents are diverging, and descending if the surface currents are converging. This is an obvious conclusion, because, with diverging surface currents, water is carried away from the area of divergence and must be replaced by water that rises from some depth below the surface, and vice versa. Thus, conclusions as to vertical motion can be based on charts showing the surface currents. For this purpose, it is of advantage to write the divergence of a two-dimensional vector field in a different form: The equation of continuity is applicable not only to the field of mass but also to the field of a dissolved substance that is not influenced by biological activity. Let the mass of the substance per unit mass of water be s. Multiplying the equation of continuity by s and integrating from the surface to bottom, one obtains, if the vertical velocity at the surface is zero, These equations have already been used in simplified form in order to compute the relation between inflow and outflow of basins (p. 147). Other simplifications have been introduced by Knudsen, Witting, and Gehrke (Krümmel, 1911, p. 509–512). Trajectories (full drawn lines) and stream lines (dashed lines) in a progressive surface wave. Stream Lines and Trajectories. The vector lines showing the direction of currents at a given time are called the stream lines, or the lines of flow. The paths followed by the moving water particles, on the other hand, are called the trajectories of the particles. Stream lines and trajectories are identical only when the motion is stationary, in which case the stream, lines of the velocity field remain unaltered in time, and a particle remains on the same stream line. The general difference between stream lines and trajectories can be illustrated by considering the type of motion in a traveling surface wave. The solid lines with arrows in fig. 98 show the stream lines in a cross section of a surface wave that is supposed to move from left to right, passing the point A. When the crest of the wave passes A, the motion of the water particles at A is in the direction of progress, but with decreasing It is supposed that the speed at which the wave travels is much greater than the velocity of the single water particles that take part in the wave motion. 
On this assumption a water particle that originally was located below A will never be much removed from this vertical and will return after one wave period to its original position. The trajectories of such particles in this case are circles, the diameters of which decrease with increasing distance from the surface, as shown in the figure. It is evident that the trajectories bear no similarity to the stream lines. Representations of the Field of Motion in the Sea Trajectories of the surface water masses of the ocean can be determined by following the drift of floating bodies that are carried by the currents. It is necessary, however, to exercise considerable care when interpreting the available information about drift of bodies, because often the wind has carried the body through the water. Furthermore, in most cases, only the end points of the trajectory are known—that is, the localities where the drift commenced and ended. Results of drift-bottle experiments present an example of incomplete information as to trajectories. As a rule, drift bottles are recovered on beaches, and a reconstruction of the paths taken by the bottles from the places at which they were released may be very hypothetical. The reconstruction may be aided by additional information in the form of knowledge of distribution of surface temperatures and salinities that are related to the currents, or by information obtained from drift bottles that have been picked up at sea. Systematic drift-bottle experiments have been conducted, especially in coastal areas that are of importance to fisheries. Stream lines of the actual surface or subsurface currents must be based upon a very large number of direct current measurements. Where the velocity is not stationary, simultaneous observations are required. Direct measurements of subsurface currents must be made from anchored vessels, but this procedure is so difficult that no simultaneous measurements that can be used for preparing charts of observed subsurface currents for any area are available. Numerous observations of surface currents, on the other hand, have been derived from ships' logs. Assume that the position of the vessel at Determination of surface currents by difference between positions by fixes and dead reckoning. The data on surface currents obtained from ships' logs cannot be used for construction of a synoptic chart of the currents, because the number of simultaneous observations is far too small. Data for months, quarter years, or seasons have been compiled, however, from many years' observations, although even these are unsatisfactory for presentation of the average conditions because such data are not evenly distributed over large areas but are concentrated along trade routes. In some charts the average direction in different localities is indicated by arrows, and where strong currents prevail the average speed in nautical miles per day is shown by a number. In other charts the surface flow is represented by direction roses in which the number at the center of the rose represents the percentage of no current, the lengths of the different arrows represent the percentage of currents in the direction of the arrows, and the figures at the ends of the arrows represent the average velocity in miles per day of currents in the indicated direction. These charts contain either averages for the year or for groups of months. 
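The determination of a surface current from the difference between the position fixed by observation and the position computed by dead reckoning, as described above, reduces to a short calculation. The positions, elapsed time, and local flat grid in the sketch below are invented for illustration; only the idea of dividing the displacement by the elapsed time is taken from the text.

```python
import math

# A minimal sketch of the dead-reckoning method: the average surface current
# over the run is taken as the displacement between the observed fix and the
# dead-reckoned position, divided by the elapsed time.  Positions are given
# in kilometres on a hypothetical local flat-earth grid (east, north).

def current_from_fix(dead_reckoned_km, fix_km, elapsed_hours):
    """Return current speed (knots) and direction (degrees true)."""
    dx = fix_km[0] - dead_reckoned_km[0]      # eastward displacement, km
    dy = fix_km[1] - dead_reckoned_km[1]      # northward displacement, km
    distance_nm = math.hypot(dx, dy) / 1.852  # km -> nautical miles
    speed_knots = distance_nm / elapsed_hours
    direction = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed_knots, direction

# Example: after a 24-hour run the fix lies 20 km east and 10 km north of the
# dead-reckoned position.
speed, direction = current_from_fix((0.0, 0.0), (20.0, 10.0), 24.0)
print(f"average surface current: {speed:.2f} knots toward {direction:.0f} degrees")
```

Estimates of this kind, accumulated from many ships' logs, are what underlie the current charts described above.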
On the basis of such charts, average surface currents during seasons or months have in some areas been represented by means of stream lines and equiscalar curves of velocity. The principle advantage of this representation is that it permits a rapid survey of the major features and that it brings out the singularities of the stream lines, although in many instances the interpretation of the data is uncertain and the details of the chart will depend greatly upon personal judgment. In drawing these stream lines it is necessary to follow the rules concerning vector lines (p. 419). The stream lines cannot intersect, but an infinite number of stream lines can meet in a point of convergence or divergence or can approach asymptotically a line of convergence or diverge asymptotically from a line of divergence. Stream lines of the surface currents off southeastern Africa in July (after Wilimzik). As an example, stream lines of the surface flow in July off southeast Africa and to the south and southeast of Madagascar are shown in fig. 100. The figure is based on a chart by Willimzik (1929), but a number of the stream lines in the original chart have been omitted for the sake of simplification. In the chart a number of the characteristic singularities of a vector field are shown. Three hyperbolic points marked A appear, four points of convergence marked B are seen, and a number of lines of convergence marked C and lines of divergence marked D are present. The stream lines do not everywhere run parallel to the coast, and the representation involves the assumption of vertical motion at the coast, where the horizontal velocity, however, must vanish. The most conspicuous feature is the continuous line of convergence that to the southwest of Madagascar curves south and then runs west, following lat. 35°S. At this line of convergence, the Subtropical Convergence, which can be traced across the entire Indian Ocean and has its counterpart in other oceans, descending motion must take place. Similarly, descending motion must be present at the other lines of convergence, at the points of convergence, and at the east coast of Madagascar, whereas ascending motion must be present along the lines of divergence and along the west coast of Madagascar, where the surface waters flow away from the coast. Velocity curves have been omitted, for which reason the conclusions as to vertical motion remain incomplete (see p. 425). Near the coasts, eddies or countercurrents are indicated, and these phenomena often represent characteristic features of the flow and remain unaltered during long periods. As has already been stated, representations of surface flow by means of stream lines have been prepared in a few cases only. As a rule, the surface currents are shown by means of arrows. In some instances the representation is based on ships' observation of currents, but in other cases the surface flow has been derived from observed distribution of temperature and salinity, perhaps taking results of drift-bottle experiments into account. The velocity of the currents may not be indicated or may be shown by added numerals, or by the thickness of the arrows. No uniform system has been adopted (see Defant, 1929), because the available data are of such different kinds that in each individual case a form of representation must be selected which presents the available information in the most satisfactory manner. Other examples of surface flow will be given in the section dealing with the currents in specific areas.
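The rule used above for inferring vertical motion from a chart of surface currents lends itself to a small numerical sketch: where the horizontal divergence of the surface flow is positive, water must ascend from below to replace it, and where it is negative, water must descend. The current field, grid spacing, and layer thickness below are hypothetical, and the sign convention (vertical velocity taken positive upward here, unlike the downward-positive z axis used earlier in the text) is stated in the comments.

```python
import numpy as np

# A sketch of the inference of vertical motion from surface currents.
# The current field and the layer thickness H are hypothetical; the grid
# stands in for a chart of observed surface currents.

H = 50.0                                  # assumed thickness of the top layer, m
x = np.linspace(0.0, 100e3, 101)          # 100 km section, 1 km spacing
y = np.linspace(0.0, 100e3, 101)
X, Y = np.meshgrid(x, y, indexing="ij")

# Hypothetical surface currents (m/s): water spreading away from the centre.
u = 0.2 * (X - 50e3) / 50e3
v = 0.2 * (Y - 50e3) / 50e3

du_dx = np.gradient(u, x, axis=0)
dv_dy = np.gradient(v, y, axis=1)
divergence = du_dx + dv_dy                # horizontal divergence, 1/s

# With the vertical velocity at the surface taken as zero, continuity
# integrated over the layer gives a vertical velocity of roughly
# H * divergence at depth H.  Here w is taken positive upward: upward
# (ascending) where the surface currents diverge, downward where they converge.
w_upward = H * divergence                 # m/s

print("maximum divergence :", divergence.max(), "1/s")
print("implied upwelling  :", w_upward.max() * 86400, "m/day (ascending motion)")
```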
http://publishing.cdlib.org/ucpressebooks/view?docId=kt167nb66r&doc.view=content&chunk.id=d2_2_ch12&toc.depth=1&anchor.id=0&brand=eschol
13
87
The main ideas which underpin the calculus developed over a very long period of time indeed. The first steps were taken by Greek mathematicians. To the Greeks numbers were ratios of integers so the number line had "holes" in it. They got round this difficulty by using lengths, areas and volumes in addition to numbers for, to the Greeks, not all lengths were numbers. Zeno of Elea, about 450 BC, gave a number of problems which were based on the infinite. For example he argued that motion is impossible:- If a body moves from A to B then before it reaches B it passes through the mid-point, say B1, of AB. Now to move to B1 it must first reach the mid-point B2 of AB1. Continue this argument to see that A must move through an infinite number of distances and so cannot move. Leucippus, Democritus and Antiphon all made contributions to the Greek method of exhaustion, which was put on a scientific basis by Eudoxus about 370 BC. The method of exhaustion is so called because one thinks of the areas measured expanding so that they account for more and more of the required area. However Archimedes, around 225 BC, made one of the most significant of the Greek contributions. His first important advance was to show that the area of a segment of a parabola is 4/3 the area of a triangle with the same base and vertex and 2/3 of the area of the circumscribed parallelogram. Archimedes constructed an infinite sequence of triangles starting with one of area A and continually adding further triangles between the existing ones and the parabola to get areas A, A + A/4, A + A/4 + A/16, A + A/4 + A/16 + A/64, ... The area of the segment of the parabola is therefore A(1 + 1/4 + 1/4^2 + 1/4^3 + ...) = (4/3)A. This is the first known example of the summation of an infinite series. Archimedes used the method of exhaustion to find an approximation to the area of a circle. This, of course, is an early example of integration which led to approximate values of π. Among other 'integrations' by Archimedes were the volume and surface area of a sphere, the volume and area of a cone, the surface area of an ellipse, the volume of any segment of a paraboloid of revolution and a segment of an hyperboloid of revolution. No further progress was made until the 16th Century, when mechanics began to drive mathematicians to examine problems such as centres of gravity. Luca Valerio (1552-1618) published De quadratura parabolae in Rome (1606), which continued the Greek methods of attacking these types of area problems. Kepler, in his work on planetary motion, had to find the area of sectors of an ellipse. His method consisted of thinking of areas as sums of lines, another crude form of integration, but Kepler had little time for Greek rigour and was rather lucky to obtain the correct answer after making two cancelling errors in this work. Three mathematicians, born within three years of each other, were the next to make major contributions. They were Fermat, Roberval and Cavalieri. Cavalieri was led to his 'method of indivisibles' by Kepler's attempts at integration. He was not rigorous in his approach, and it is hard to see clearly how he thought about his method. It appears that Cavalieri thought of an area as being made up of components which were lines and then summed his infinite number of 'indivisibles'. He showed, using these methods, that the integral of x^n from 0 to a was a^(n+1)/(n + 1), by showing the result for a number of values of n and inferring the general result.
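Both results quoted in this passage are easy to check numerically. The short Python sketch below sums the first terms of Archimedes' series A(1 + 1/4 + 1/4^2 + ...) and compares a simple Riemann sum with Cavalieri's rule a^(n+1)/(n + 1); the particular values of n, a, and the number of strips are arbitrary choices for illustration.

```python
from fractions import Fraction

# Archimedes' quadrature of the parabola: A(1 + 1/4 + 1/4^2 + ...) = (4/3)A.
A = Fraction(1)                          # area of the first triangle
partial = sum(A * Fraction(1, 4) ** k for k in range(20))
print("partial sum of Archimedes' series:", float(partial))         # -> 1.3333...
print("limit (4/3)A                     :", float(Fraction(4, 3) * A))

# Cavalieri's rule: the area under x^n from 0 to a is a^(n+1)/(n + 1),
# checked here with a simple midpoint Riemann sum.
def riemann(n, a, strips=100_000):
    """Riemann-sum approximation to the integral of x^n from 0 to a."""
    dx = a / strips
    return sum(((i + 0.5) * dx) ** n for i in range(strips)) * dx

n, a = 3, 2.0
print("Riemann sum for x^3 on [0, 2]    :", riemann(n, a))           # ~ 4.0
print("a^(n+1)/(n+1)                    :", a ** (n + 1) / (n + 1))
```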
Roberval considered problems of the same type but was much more rigorous than Cavalieri. Roberval looked at the area between a curve and a line as being made up of an infinite number of infinitely narrow rectangular strips. He applied this to the integral of x^m from 0 to 1, which he showed had approximate value (0^m + 1^m + 2^m + ... + (n-1)^m)/n^(m+1). Roberval then asserted that this tended to 1/(m + 1) as n tends to infinity, so calculating the area. Fermat was also more rigorous in his approach but gave no proofs. He generalised the parabola and hyperbola:- Parabola: y/a = (x/b)^2 to (y/a)^n = (x/b)^m; Hyperbola: y/a = b/x to (y/a)^n = (b/x)^m. In the course of examining y/a = (x/b)^p, Fermat computed the sum of r^p from r = 1 to r = n. Fermat also investigated maxima and minima by considering when the tangent to the curve was parallel to the x-axis. He wrote to Descartes giving the method essentially as used today, namely finding maxima and minima by calculating when the derivative of the function was 0. In fact, because of this work, Lagrange stated clearly that he considered Fermat to be the inventor of the calculus. Descartes produced an important method of determining normals in La Géométrie in 1637, based on double intersection. De Beaune extended his methods and applied them to tangents, where double intersection translates into double roots. Hudde discovered a simpler method, known as Hudde's Rule, which basically involves the derivative. Descartes' method and Hudde's Rule were important in influencing Newton. Huygens was critical of Cavalieri's proofs, saying that what one needs is a proof which at least convinces one that a rigorous proof could be constructed. Huygens was a major influence on Leibniz and so played a considerable part in producing a more satisfactory approach to the calculus. The next major step was provided by Torricelli and Barrow. Barrow gave a method of tangents to a curve in which the tangent is given as the limit of a chord as the points approach each other, known as Barrow's differential triangle. Both Torricelli and Barrow considered the problem of motion with variable speed. The derivative of the distance is velocity, and the inverse operation takes one from the velocity to the distance. Hence an awareness of the inverse of differentiation began to evolve naturally, and the idea that integral and derivative were inverses of each other was familiar to Barrow. In fact, although Barrow never explicitly stated the fundamental theorem of the calculus, he was working towards the result, and Newton was to continue in this direction and state the Fundamental Theorem of the Calculus explicitly. Torricelli's work was continued in Italy by Mengoli and Angeli. Newton wrote a tract on fluxions in October 1666. This was a work which was not published at the time but was seen by many mathematicians and had a major influence on the direction the calculus was to take. Newton thought of a particle tracing out a curve with two moving lines which were the coordinates. The horizontal velocity x' and the vertical velocity y' were the fluxions of x and y associated with the flux of time. The fluents, or flowing quantities, were x and y themselves. With this fluxion notation, y'/x' was the tangent to f(x, y) = 0. In his 1666 tract Newton discusses the converse problem: given the relationship between x and y'/x', find y. Hence the slope of the tangent was given for each x, and when y'/x' = f(x), Newton solves the problem by antidifferentiation.
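Newton's converse problem, recovering y from a given slope y'/x' = f(x), can be illustrated with a small numerical antidifferentiation. The sketch below uses the hypothetical slope f(x) = 3x^2, for which the recovered fluent should agree with x^3 once the starting value is fixed; the trapezoidal rule is simply a convenient modern stand-in for the idea.

```python
import numpy as np

# Given the slope y'/x' = f(x) at every x, recover y by antidifferentiation.
# Here f(x) = 3x^2 is a hypothetical example, so the result should match x^3.

f = lambda x: 3 * x**2

x = np.linspace(0.0, 2.0, 2001)
dx = x[1] - x[0]
# cumulative trapezoidal antiderivative, with the starting value y(0) = 0
y = np.concatenate(([0.0], np.cumsum((f(x[:-1]) + f(x[1:])) * dx / 2)))

print("recovered y at x = 2 :", y[-1])        # ~ 8.0
print("exact x^3 at x = 2   :", 2.0**3)
```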
He also calculated areas by antidifferentiation, and this work contains the first clear statement of the Fundamental Theorem of the Calculus. Newton had problems publishing his mathematical work. Barrow was in some way to blame for this, since the publisher of Barrow's work had gone bankrupt and publishers were, after this, wary of publishing mathematical works! Newton's work on Analysis with infinite series was written in 1669 and circulated in manuscript. It was not published until 1711. Similarly, his Method of fluxions and infinite series was written in 1671 and published in English translation in 1736. The Latin original was not published until much later. In these two works Newton calculated the series expansions for sin x and cos x and the expansion for what was actually the exponential function, although this function was not established until Euler introduced the present notation e^x. Newton's next mathematical work was Tractatus de Quadratura Curvarum, which he wrote in 1693 but which was not published until 1704, when he published it as an appendix to his Opticks. This work contains another approach, which involves taking limits. Newton says: In the time in which x by flowing becomes x + o, the quantity x^n becomes (x + o)^n, i.e. by the method of infinite series, x^n + nox^(n-1) + ((n^2 - n)/2)o^2 x^(n-2) + ... At the end he lets the increment o vanish by 'taking limits'. Leibniz learnt much on a European tour which led him to meet Huygens in Paris in 1672. He also met Hooke and Boyle in London in 1673, where he bought several mathematics books, including Barrow's works. Leibniz was to have a lengthy correspondence with Barrow. On returning to Paris Leibniz did some very fine work on the calculus, thinking of the foundations very differently from Newton. Newton considered variables changing with time. Leibniz thought of variables x, y as ranging over sequences of infinitely close values. He introduced dx and dy as differences between successive values of these sequences. Leibniz knew that dy/dx gives the tangent, but he did not use it as a defining property. For Newton, integration consisted of finding fluents for a given fluxion, so the fact that integration and differentiation were inverses was implied. Leibniz used integration as a sum, in a rather similar way to Cavalieri. He was also happy to use 'infinitesimals' dx and dy where Newton used x' and y', which were finite velocities. Of course, neither Leibniz nor Newton thought in terms of functions; both always thought in terms of graphs. For Newton the calculus was geometrical, while Leibniz took it towards analysis. Leibniz was very conscious that finding a good notation was of fundamental importance and thought a lot about it. Newton, on the other hand, wrote more for himself and, as a consequence, tended to use whatever notation he thought of on the day. Leibniz's notation of d and ∫ highlighted the operator aspect, which proved important in later developments. By 1675 Leibniz had settled on the notation ∫ y dy = y^2/2, written exactly as it would be today. His results on the integral calculus were published in 1684 and 1686 under the name 'calculus summatorius'; the name integral calculus was suggested by Jacob Bernoulli in 1690. After Newton and Leibniz, the development of the calculus was continued by Jacob Bernoulli and Johann Bernoulli.
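The limiting process Newton describes in the Tractatus de Quadratura Curvarum can also be shown numerically: as the increment o shrinks, the quotient ((x + o)^n - x^n)/o approaches n x^(n-1), the leading term of the binomial expansion quoted above. The values of x and n in the sketch are arbitrary.

```python
# Numerical illustration of letting the increment o "vanish": the change in
# x^n divided by o approaches n * x^(n-1) as o shrinks.

x, n = 2.0, 5

for o in (1e-1, 1e-3, 1e-5, 1e-7):
    quotient = ((x + o) ** n - x ** n) / o
    print(f"o = {o:8.1e}   ((x+o)^n - x^n)/o = {quotient:.6f}")

print("n * x^(n-1) =", n * x ** (n - 1))   # the limiting ratio, 80.0
```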
However, when Berkeley published his Analyst in 1734, attacking the lack of rigour in the calculus and disputing the logic on which it was based, much effort was made to tighten the reasoning. Maclaurin attempted to put the calculus on a rigorous geometrical basis, but the really satisfactory basis for the calculus had to wait for the work of Cauchy in the 19th Century. Article by: J J O'Connor and E F Robertson
http://turnbull.mcs.st-and.ac.uk/~history/HistTopics/The_rise_of_calculus.html
13
62
In previous sections of this book, we have shown how the fossil record invalidates the theory of evolution. In point of fact, there was no need for us to relate any of that, because the theory of evolution collapses long before one gets to any claims about the evidence of fossils. The subject that renders the theory meaningless from the very outset is the question of how life first appeared on earth. When it addresses this question, evolutionary theory claims that life started with a cell that formed by chance. According to this scenario, four billion years ago various chemical compounds underwent a reaction in the primordial atmosphere on the earth in which the effects of thunderbolts and atmospheric pressure led to the formation of the first living cell. The first thing that must be said is that the claim that nonliving materials can come together to form life is an unscientific one that has not been verified by any experiment or observation. Life is only generated from life. Each living cell is formed by the replication of another cell. No one in the world has ever succeeded in forming a living cell by bringing inanimate materials together, not even in the most advanced laboratories. The theory of evolution claims that a living cell-which cannot be produced even when all the power of the human intellect, knowledge and technology are brought to bear-nevertheless managed to form by chance under primordial conditions on the earth. In the following pages, we will examine why this claim is contrary to the most basic principles of science and reason. An Example of the Logic of "Chance" If one believes that a living cell can come into existence by chance, then there is nothing to prevent one from believing a similar story that we will relate below. It is the story of a town. One day, a lump of clay, pressed between the rocks in a barren land, becomes wet after it rains. The wet clay dries and hardens when the sun rises, and takes on a stiff, resistant form. Afterwards, these rocks, which also served as a mould, are somehow smashed into pieces, and then a neat, well shaped, and strong brick appears. This brick waits under the same natural conditions for years for a similar brick to be formed. This goes on until hundreds and thousands of the same bricks have been formed in the same place. However, by chance, none of the bricks that were previously formed are damaged. Although exposed to storms, rain, wind, scorching sun, and freezing cold for thousands of years, the bricks do not crack, break up, or get dragged away, but wait there in the same place with the same determination for other bricks to form. When the number of bricks is adequate, they erect a building by being arranged sideways and on top of each other, having been randomly dragged along by the effects of natural conditions such as winds, storms, or tornadoes. Meanwhile, materials such as cement or soil mixtures form under "natural conditions," with perfect timing, and creep between the bricks to clamp them to each other. While all this is happening, iron ore under the ground is shaped under "natural conditions" and lays the foundations of a building that is to be formed with these bricks. At the end of this process, a complete building rises with all its materials, carpentry, and installations intact. Of course, a building does not only consist of foundations, bricks, and cement. How, then, are the other missing materials to be obtained? 
The answer is simple: all kinds of materials that are needed for the construction of the building exist in the earth on which it is erected. Silicon for the glass, copper for the electric cables, iron for the columns, beams, water pipes, etc. all exist under the ground in abundant quantities. It takes only the skill of "natural conditions" to shape and place these materials inside the building. All the installations, carpentry, and accessories are placed among the bricks with the help of the blowing wind, rain, and earthquakes. Everything has gone so well that the bricks are arranged so as to leave the necessary window spaces as if they knew that something called glass would be formed later on by natural conditions. Moreover, they have not forgotten to leave some space to allow the installation of water, electricity and heating systems, which are also later to be formed by chance. Everything has gone so well that "coincidences" and "natural conditions" produce a perfect design. If you have managed to sustain your belief in this story so far, then you should have no trouble surmising how the town's other buildings, plants, highways, sidewalks, substructures, communications, and transportation systems came about. If you possess technical knowledge and are fairly conversant with the subject, you can even write an extremely "scientific" book of a few volumes stating your theories about "the evolutionary process of a sewage system and its uniformity with the present structures." You may well be honored with academic awards for your clever studies, and may consider yourself a genius, shedding light on the nature of humanity. The theory of evolution, which claims that life came into existence by chance, is no less absurd than our story, for, with all its operational systems, and systems of communication, transportation and management, a cell is no less complex than a city. In his book Evolution: A Theory in Crisis, the molecular biologist Michael Denton discusses the complex structure of the cell: To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity... Is it really credible that random processes could have constructed a reality, the smallest element of which-a functional protein or gene-is complex beyond our own creative capacities, a reality which is the very antithesis of chance, which excels in every sense anything produced by the intelligence of man?237 The Complex Structure and Systems in the Cell The complex structure of the living cell was unknown in Darwin's day and at the time, ascribing life to "coincidences and natural conditions" was thought by evolutionists to be convincing enough. 
Darwin had proposed that the first cell could easily have formed "in some warm little pond."238 One of Darwin's supporters, the German biologist Ernst Haeckel, examined under the microscope a mixture of mud removed from the sea bed by a research ship and claimed that this was a nonliving substance that turned into a living one. This so-called "mud that comes to life," known as Bathybius haeckelii ("Haeckel's mud from the depths"), is an indication of just how simple a thing life was thought to be by the founders of the theory of evolution. The technology of the twentieth century has delved into the tiniest particles of life, and has revealed that the cell is the most complex system mankind has ever confronted. Today we know that the cell contains power stations producing the energy to be used by the cell, factories manufacturing the enzymes and hormones essential for life, a databank where all the necessary information about all products to be produced is recorded, complex transportation systems and pipelines for carrying raw materials and products from one place to another, advanced laboratories and refineries for breaking down external raw materials into their useable parts, and specialized cell membrane proteins to control the incoming and outgoing materials. And these constitute only a small part of this incredibly complex system. W. H. Thorpe, an evolutionist scientist, acknowledges that "The most elementary type of cell constitutes a 'mechanism' unimaginably more complex than any machine yet thought up, let alone constructed, by man."239 A cell is so complex that even the high level of technology attained today cannot produce one. No effort to create an artificial cell has ever met with success. Indeed, all attempts to do so have been abandoned. The theory of evolution claims that this system-which mankind, with all the intelligence, knowledge and technology at its disposal, cannot succeed in reproducing-came into existence "by chance" under the conditions of the primordial earth. Actually, the probability of forming a cell by chance is about the same as that of producing a perfect copy of a book following an explosion in a printing house. The English mathematician and astronomer Sir Fred Hoyle made a similar comparison in an interview published in Nature magazine on November 12, 1981. Although an evolutionist himself, Hoyle stated that the chance that higher life forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junk-yard might assemble a Boeing 747 from the materials therein.240 This means that it is not possible for the cell to have come into being by chance, and therefore it must definitely have been "created." One of the basic reasons why the theory of evolution cannot explain how the cell came into existence is the "irreducible complexity" in it. A living cell maintains itself with the harmonious co-operation of many organelles. If only one of these organelles fails to function, the cell cannot remain alive. The cell does not have the chance to wait for unconscious mechanisms like natural selection or mutation to permit it to develop. Thus, the first cell on earth was necessarily a complete cell possessing all the required organelles and functions, and this definitely means that this cell had to have been created. The Problem of the Origin of Proteins So much for the cell, but evolution fails even to account for the building-blocks of a cell. 
The formation, under natural conditions, of just one single protein out of the thousands of complex protein molecules making up the cell is impossible. Proteins are giant molecules consisting of smaller units called amino acids that are arranged in a particular sequence in certain quantities and structures. These units constitute the building blocks of a living protein. The simplest protein is composed of 50 amino acids, but there are some that contain thousands. The crucial point is this. The absence, addition, or replacement of a single amino acid in the structure of a protein causes the protein to become a useless molecular heap. Every amino acid has to be in the right place and in the right order. The theory of evolution, which claims that life emerged as a result of chance, is quite helpless in the face of this order, since it is too wondrous to be explained by coincidence. (Furthermore, the theory cannot even substantiate the claim of the accidental formation of amino acids, as will be discussed later.) The fact that it is quite impossible for the functional structure of proteins to come about by chance can easily be observed even by simple probability calculations that anybody can understand. For instance, an average-sized protein molecule composed of 288 amino acids, and contains twelve different types of amino acids can be arranged in 10300 different ways. (This is an astronomically huge number, consisting of 1 followed by 300 zeros.) Of all of these possible sequences, only one forms the desired protein molecule. The rest of them are amino-acid chains that are either totally useless, or else potentially harmful to living things. In other words, the probability of the formation of only one protein molecule is "1 in 10300." The probability of this "1" actually occurring is practically nil. (In practice, probabilities smaller than 1 over 1050 are thought of as "zero probability"). Furthermore, a protein molecule of 288 amino acids is a rather modest one compared with some giant protein molecules consisting of thousands of amino acids. When we apply similar probability calculations to these giant protein molecules, we see that even the word "impossible" is insufficient to describe the true situation. When we proceed one step further in the evolutionary scheme of life, we observe that one single protein means nothing by itself. One of the smallest bacteria ever discovered, Mycoplasma hominis H39, contains 600 types of proteins. In this case, we would have to repeat the probability calculations we have made above for one protein for each of these 600 different types of proteins. The result beggars even the concept of impossibility. Some people reading these lines who have so far accepted the theory of evolution as a scientific explanation may suspect that these numbers are exaggerated and do not reflect the true facts. That is not the case: these are definite and concrete facts. No evolutionist can object to these numbers. This situation is in fact acknowledged by many evolutionists. For example, Harold F. Blum, a prominent evolutionist scientist, states that "The spontaneous formation of a polypeptide of the size of the smallest known proteins seems beyond all probability."241 Evolutionists claim that molecular evolution took place over a very long period of time and that this made the impossible possible. Nevertheless, no matter how long the given period may be, it is not possible for amino acids to form proteins by chance. 
William Stokes, an American geologist, admits this fact in his book Essentials of Earth History, writing that the probability is so small "that it would not occur during billions of years on billions of planets, each covered by a blanket of concentrated watery solution of the necessary amino acids."242 So what does all this mean? Perry Reeves, a professor of chemistry, answers the question: When one examines the vast number of possible structures that could result from a simple random combination of amino acids in an evaporating primordial pond, it is mind-boggling to believe that life could have originated in this way. It is more plausible that a Great Builder with a master plan would be required for such a task.243 If the coincidental formation of even one of these proteins is impossible, it is billions of times "more impossible" for some one million of those proteins to come together by chance and make up a complete human cell. What is more, by no means does a cell consist of a mere heap of proteins. In addition to the proteins, a cell also includes nucleic acids, carbohydrates, lipids, vitamins, and many other chemicals such as electrolytes arranged in a specific proportion, equilibrium, and design in terms of both structure and function. Each of these elements functions as a building block or co-molecule in various organelles. Robert Shapiro, a professor of chemistry at New York University and a DNA expert, calculated the probability of the coincidental formation of the 2000 types of proteins found in a single bacterium (There are 200,000 different types of proteins in a human cell.) The number that was found was 1 over 1040000.244 (This is an incredible number obtained by putting 40,000 zeros after the 1) A professor of applied mathematics and astronomy from University College Cardiff, Wales, Chandra Wickramasinghe, comments: The likelihood of the spontaneous formation of life from inanimate matter is one to a number with 40,000 noughts after it... It is big enough to bury Darwin and the whole theory of evolution. There was no primeval soup, neither on this planet nor on any other, and if the beginnings of life were not random, they must therefore have been the product of purposeful intelligence.245 Sir Fred Hoyle comments on these implausible numbers: Indeed, such a theory (that life was assembled by an intelligence) is so obvious that one wonders why it is not widely accepted as being self-evident. The reasons are psychological rather than scientific.246 An article published in the January 1999 issue of Science News revealed that no explanation had yet been found for how amino acids could turn into proteins: ….no one has ever satisfactorily explained how the widely distributed ingredients linked up into proteins. Presumed conditions of primordial Earth would have driven the amino acids toward lonely isolation.247 Let us now examine in detail why the evolutionist scenario regarding the formation of proteins is impossible. Even the correct sequence of the right amino acids is still not enough for the formation of a functional protein molecule. In addition to these requirements, each of the 20 different types of amino acids present in the composition of proteins must be left-handed. There are two different types of amino acids-as of all organic molecules-called "left-handed" and "right-handed." The difference between them is the mirror-symmetry between their three dimensional structures, which is similar to that of a person's right and left hands. 
Amino acids of either of these two types can easily bond with one another. But one astonishing fact that has been revealed by research is that all the proteins in plants and animals on this planet, from the simplest organism to the most complex, are made up of left-handed amino acids. If even a single right-handed amino acid gets attached to the structure of a protein, the protein is rendered useless. In a series of experiments, surprisingly, bacteria that were exposed to right-handed amino acids immediately destroyed them. In some cases, they produced usable left-handed amino acids from the fractured components. Let us for an instant suppose that life came about by chance as evolutionists claim it did. In this case, the right- and left-handed amino acids that were generated by chance should be present in roughly equal proportions in nature. Therefore, all living things should have both right- and left-handed amino acids in their constitution, because chemically it is possible for amino acids of both types to combine with each other. However, as we know, in the real world the proteins existing in all living organisms are made up only of left-handed amino acids. The question of how proteins can pick out only the left-handed ones from among all amino acids, and how not even a single right-handed amino acid gets involved in the life process, is a problem that still baffles evolutionists. Such a specific and conscious selection constitutes one of the greatest impasses facing the theory of evolution. Moreover, this characteristic of proteins makes the problem facing evolutionists with respect to "chance" even worse. In order for a "meaningful" protein to be generated, it is not enough for the amino acids to be present in a particular number and sequence, and to be combined together in the right three-dimensional design. Additionally, all these amino acids have to be left-handed: not even one of them can be right-handed. Yet there is no natural selection mechanism which can identify that a right-handed amino acid has been added to the sequence and recognize that it must therefore be removed from the chain. This situation once more eliminates for good the possibility of coincidence and chance. The Britannica Science Encyclopaedia, which is an outspoken defender of evolution, states that the amino acids of all living organisms on earth, and the building blocks of complex polymers such as proteins, have the same left-handed asymmetry. It adds that this is tantamount to tossing a coin a million times and always getting heads. The same encyclopaedia states that it is impossible to understand why molecules become left-handed or right-handed, and that this choice is fascinatingly related to the origin of life on earth.248 If a coin always turns up heads when tossed a million times, is it more logical to attribute that to chance, or else to accept that there is conscious intervention going on? The answer should be obvious. However, obvious though it may be, evolutionists still take refuge in coincidence, simply because they do not want to accept the existence of conscious intervention. A situation similar to the left-handedness of amino acids also exists with respect to nucleotides, the smallest units of the nucleic acids, DNA and RNA. In contrast to proteins, in which only left-handed amino acids are chosen, in the case of the nucleic acids, the preferred forms of their nucleotide components are always right-handed. This is another fact that can never be explained by chance. 
In conclusion, it is proven beyond a shadow of a doubt by the probabilities we have examined that the origin of life cannot be explained by chance. If we attempt to calculate the probability of an average-sized protein consisting of 400 amino acids being selected only from left-handed amino acids, we come up with a probability of 1 in 2400, or 10120. Just for a comparison, let us remember that the number of electrons in the universe is estimated at 1079, which although vast, is a much smaller number. The probability of these amino acids forming the required sequence and functional form would generate much larger numbers. If we add these probabilities to each other, and if we go on to work out the probabilities of even higher numbers and types of proteins, the calculations become inconceivable. The Indispensability of the Peptide Link The difficulties the theory of evolution is unable to overcome with regard to the development of a single protein are not limited to those we have recounted so far. It is not enough for amino acids to be arranged in the correct numbers, sequences, and required three-dimensional structures. The formation of a protein also requires that amino acid molecules with more than one arm be linked to each other only in certain ways. Such a bond is called a "peptide bond." Amino acids can make different bonds with each other; but proteins are made up of those-and only those-amino acids which are joined by peptide bonds. A comparison will clarify this point. Suppose that all the parts of a car were complete and correctly assembled, with the sole exception that one of the wheels was fastened in place not with the usual nuts and bolts, but with a piece of wire, in such a way that its hub faced the ground. It would be impossible for such a car to move even the shortest distance, no matter how complex its technology or how powerful its engine. At first glance, everything would seem to be in the right place, but the faulty attachment of even one wheel would make the entire car useless. In the same way, in a protein molecule the joining of even one amino acid to another with a bond other than a peptide bond would make the entire molecule useless. Research has shown that amino acids combining at random combine with a peptide bond only 50 percent of the time, and that the rest of the time different bonds that are not present in proteins emerge. To function properly, each amino acid making up a protein must be joined to others only with a peptide bond, in the same way that it likewise must be chosen only from among left-handed forms. The probability of this happening is the same as the probability of each protein's being left-handed. That is, when we consider a protein made up of 400 amino acids, the probability of all amino acids combining among themselves with only peptide bonds is 1 in 2399. If we add together the three probabilities (that of amino acids being laid out correctly, that of their all being left-handed, and that of their all being joined by peptide links), then we come face to face with the astronomical figure of 1 in 10950. This is a probability only on paper. Practically speaking, there is zero chance of its actually happening. As we saw earlier, in mathematics, a probability smaller than 1 in 1050 is statistically considered to have a "zero" probability of occurring. 
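For readers who want to see where the combined figure comes from, the sketch below reproduces the arithmetic in log10 form under the assumptions stated in the text: a 1-in-20 choice of amino acid at each position, a 1-in-2 chance of the left-handed form at each position, and a 1-in-2 chance of a peptide bond at each link. The exponent obtained depends on the chain length assumed; under these assumptions a figure near 10^950 corresponds to a chain of roughly 500 amino acids, while a 400-amino-acid chain gives a somewhat smaller exponent.

```python
import math

# Reproducing the combined probability figure in log10 form to avoid overflow.
# The three factors follow the text's stated assumptions; the resulting
# exponent depends strongly on the chain length L that is assumed.

def log10_combined(L):
    sequence = L * math.log10(20)        # correct ordering of the amino acids
    chirality = L * math.log10(2)        # all residues left-handed
    peptide = (L - 1) * math.log10(2)    # all links peptide bonds
    return sequence + chirality + peptide

for L in (400, 500):
    print(f"chain of {L} amino acids: about 1 in 10^{log10_combined(L):.0f}")
```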
Even if we suppose that amino acids have combined and decomposed by a "trial and error" method, without losing any time since the formation of the earth, in order to form a single protein molecule, the time that would be required for something with a probability of 10950 to happen would still hugely exceed the estimated age of the earth. The conclusion to be drawn from all this is that evolution falls into a terrible abyss of improbability even when it comes to the formation of a single protein. One of the foremost proponents of the theory of evolution, Professor Richard Dawkins, states the impossibility the theory has fallen into in these terms: So the sort of lucky event we are looking at could be so wildly improbable that the chances of its happening, somewhere in the universe, could be as low as one in a billion billion billion in any one year. If it did happen on only one planet, anywhere in the universe, that planet has to be our planet-because here we are talking about it.249 This admission by one of evolution's foremost authorities clearly reflects the logical muddle the theory of evolution is built on. The above statements in Dawkins's book Climbing Mount Improbable are a striking example of circular reasoning which actually explains nothing: "If we are here, then that means that evolution happened." As we have seen, even the most prominent of the proponents of evolution confess that the theory is buried in impossibility when it comes to accounting for the first stage of life. But how interesting it is that, rather than accept the complete unreality of the theory they maintain, they prefer to cling to evolution in a dogmatic manner! This is a completely ideological fixation. Is There a Trial-and-Error Mechanism in Nature? Finally, we may conclude with a very important point in relation to the basic logic of probability calculations, of which we have already seen some examples. We indicated that the probability calculations made above reach astronomical levels, and that these astronomical odds have no chance of actually happening. However, there is a much more important and damaging fact facing evolutionists here. This is that under natural conditions, no period of trial and error can even start, despite the astronomical odds, because there is no trial-and-error mechanism in nature from which proteins could emerge. The calculations we gave above to demonstrate the probability of the formation of a protein molecule with 500 amino acids are valid only for an ideal trial-and-error environment, which does not actually exist in real life. That is, the probability of obtaining a useful protein is "1" in 10950 only if we suppose that there exists an imaginary mechanism in which an invisible hand joins 500 amino acids at random and then, seeing that this is not the right combination, disentangles them one by one, and arranges them again in a different order, and so on. In each trial, the amino acids would have to be separated one by one, and arranged in a new order. The synthesis should be stopped after the 500th amino acid has been added, and it must be ensured that not even one extra amino acid is involved. The trial should then be stopped to see whether or not a functional protein has yet been formed, and, in the event of failure, everything should be split up again and then tested for another sequence. Additionally, in each trial, not even one extraneous substance should be allowed to become involved. 
It is also imperative that the chain formed during the trial should not be separated and destroyed before reaching the 499th link. These conditions mean that the probabilities we have mentioned above can only operate in a controlled environment where there is a conscious mechanism directing the beginning, the end, and each intermediate stage of the process, and where only "the selection of the amino acids" is left to chance. It is clearly impossible for such an environment to exist under natural conditions. Therefore the formation of a protein in the natural environment is logically and technically impossible. Since some people are unable to take a broad view of these matters, but approach them from a superficial viewpoint and assume protein formation to be a simple chemical reaction, they may make unrealistic deductions such as "amino acids combine by way of reaction and then form proteins." However, accidental chemical reactions taking place in a nonliving structure can only lead to simple and primitive changes. The number of these is predetermined and limited. For a somewhat more complex chemical material, huge factories, chemical plants, and laboratories have to be involved. Medicines and many other chemical materials that we use in our daily life are made in just this way. Proteins have much more complex structures than these chemicals produced by industry. Therefore, it is impossible for proteins, each of which is a wonder of design and engineering, in which every part takes its place in a fixed order, to originate as a result of haphazard chemical reactions. Let us for a minute put aside all the impossibilities we have described so far, and suppose that a useful protein molecule still evolved spontaneously "by accident." Even so, evolution again has no answers, because in order for this protein to survive, it would need to be isolated from its natural habitat and be protected under very special conditions. Otherwise, it would either disintegrate from exposure to natural conditions on earth, or else join with other acids, amino acids, or chemical compounds, thereby losing its particular properties and turning into a totally different and useless substance. What we have been discussing so far is the impossibility of just one protein's coming about by chance. However, in the human body alone there are some 100,000 proteins functioning. Furthermore, there are about 1.5 million species named, and another 10 million are believed to exist. Although many similar proteins are used in many life forms, it is estimated that there must be 100 million or more types of protein in the plant and animal worlds. And the millions of species which have already become extinct are not included in this calculation. In other words, hundreds of millions of protein codes have existed in the world. If one considers that not even one protein can be explained by chance, it is clear what the existence of hundreds of millions of different proteins must mean. Bearing this truth in mind, it can clearly be understood that such concepts as "coincidence" and "chance" have nothing to do with the existence of living things. The Evolutionary Argument about the Origin of Life Above all, there is one important point to take into consideration: If any one step in the evolutionary process is proven to be impossible, this is sufficient to prove that the whole theory is totally false and invalid. 
For instance, by proving that the haphazard formation of proteins is impossible, all other claims regarding the subsequent steps of evolution are also refuted. After this, it becomes meaningless to take some human and ape skulls and engage in speculation about them. How living organisms came into existence out of nonliving matter was an issue that evolutionists did not even want to mention for a long time. However, this question, which had constantly been avoided, eventually had to be addressed, and attempts were made to settle it with a series of experiments in the second quarter of the twentieth century. The main question was: How could the first living cell have appeared in the primordial atmosphere on the earth? In other words, what kind of explanation could evolutionists offer? The first person to take the matter in hand was the Russian biologist Alexander I. Oparin, the founder of the concept of "chemical evolution." Despite all his theoretical studies, Oparin was unable to produce any results to shed light on the origin of life. He says the following in his book The Origin of Life, published in 1936: Unfortunately, however, the problem of the origin of the cell is perhaps the most obscure point in the whole study of the evolution of organisms.250 Since Oparin, evolutionists have performed countless experiments, conducted research, and made observations to prove that a cell could have been formed by chance. However, every such attempt only made the complex design of the cell clearer, and thus refuted the evolutionists' hypotheses even more. Professor Klaus Dose, the president of the Institute of Biochemistry at the University of Johannes Gutenberg, states: More than 30 years of experimentation on the origin of life in the fields of chemical and molecular evolution have led to a better perception of the immensity of the problem of the origin of life on earth rather than to its solution. At present all discussions on principal theories and experiments in the field either end in stalemate or in a confession of ignorance.251 In his book The End of Science, the evolutionary science writer John Horgan says of the origin of life, "This is by far the weakest strut of the chassis of modern biology."252 The following statement by the geochemist Jeffrey Bada, from the San Diego-based Scripps Institute, makes the helplessness of evolutionists clear: Today, as we leave the twentieth century, we still face the biggest unsolved problem that we had when we entered the twentieth century: How did life originate on Earth?253 Let us now look at the details of the theory of evolution's "biggest unsolved problem". The first subject we have to consider is the famous Miller experiment. The most generally respected study on the origin of life is the Miller experiment conducted by the American researcher Stanley Miller in 1953. (The experiment is also known as the "Urey-Miller experiment" because of the contribution of Miller's instructor at the University of Chicago, Harold Urey.) This experiment is the only "evidence" evolutionists have with which to allegedly prove the "chemical evolution thesis"; they advance it as the first stage of the supposed evolutionary process leading to life. Although nearly half a century has passed, and great technological advances have been made, nobody has made any further progress. In spite of this, Miller's experiment is still taught in textbooks as the evolutionary explanation of the earliest generation of living things. 
That is because, aware of the fact that such studies do not support, but rather actually refute, their thesis, evolutionist researchers deliberately avoid embarking on such experiments. Stanley Miller's aim was to demonstrate by means of an experiment that amino acids, the building blocks of proteins, could have come into existence "by chance" on the lifeless earth billions of years ago. In his experiment, Miller used a gas mixture that he assumed to have existed on the primordial earth (but which later proved unrealistic), composed of ammonia, methane, hydrogen, and water vapor. Since these gases would not react with each other under natural conditions, he added energy to the mixture to start a reaction among them. Supposing that this energy could have come from lightning in the primordial atmosphere, he used an electric current for this purpose. Miller heated this gas mixture at 100°C for a week and added the electrical current. At the end of the week, Miller analyzed the chemicals which had formed at the bottom of the jar, and observed that three out of the 20 amino acids which constitute the basic elements of proteins had been synthesized. This experiment aroused great excitement among evolutionists, and was promoted as an outstanding success. Moreover, in a state of intoxicated euphoria, various publications carried headlines such as "Miller creates life." However, what Miller had managed to synthesize was only a few inanimate molecules. Encouraged by this experiment, evolutionists immediately produced new scenarios. Stages following the development of amino acids were hurriedly hypothesized. Supposedly, amino acids had later united in the correct sequences by accident to form proteins. Some of these proteins which emerged by chance formed themselves into cell membrane-like structures which "somehow" came into existence and formed a primitive cell. These cells then supposedly came together over time to form multicellular living organisms. However, Miller's experiment has since proven to be false in many respects. Four Facts That Invalidate Miller's Experiment Miller's experiment sought to prove that amino acids could form on their own in primordial earth-like conditions, but it contains inconsistencies in a number of areas: 1- By using a mechanism called a "cold trap," Miller isolated the amino acids from the environment as soon as they were formed. Had he not done so, the conditions in the environment in which the amino acids were formed would immediately have destroyed these molecules. Doubtless, this kind of conscious isolation mechanism did not exist on the primordial earth. Without such a mechanism, even if one amino acid were obtained, it would immediately have been destroyed. The chemist Richard Bliss expresses this contradiction by observing that "Actually, without this trap, the chemical products, would have been destroyed by the energy source."254 And, sure enough, in his previous experiments, Miller had been unable to make even one single amino acid using the same materials without the cold trap mechanism. 2- The primordial atmosphere that Miller attempted to simulate in his experiment was not realistic. In the 1980s, scientists agreed that nitrogen and carbon dioxide should have been used in this artificial environment instead of methane and ammonia. So why did Miller insist on these gases? The answer is simple: without ammonia, it was impossible to synthesize any amino acid. 
Kevin Mc Kean talks about this in an article published in Discover magazine: Miller and Urey imitated the ancient atmosphere on the Earth with a mixture of methane and ammonia. ...However in the latest studies, it has been understood that the Earth was very hot at those times, and that it was composed of melted nickel and iron. Therefore, the chemical atmosphere of that time should have been formed mostly of nitrogen (N2), carbon dioxide (CO2) and water vapour (H2O). However these are not as appropriate as methane and ammonia for the production of organic molecules.255 The American scientists J. P. Ferris and C. T. Chen repeated Miller's experiment with an atmospheric environment that contained carbon dioxide, hydrogen, nitrogen, and water vapor, and were unable to obtain even a single amino acid molecule.256 3- Another important point that invalidates Miller's experiment is that there was enough oxygen to destroy all the amino acids in the atmosphere at the time when they were thought to have been formed. This fact, overlooked by Miller, is revealed by the traces of oxidized iron found in rocks that are estimated to be 3.5 billion years old.257 There are other findings showing that the amount of oxygen in the atmosphere at that time was much higher than originally claimed by evolutionists. Studies also show that the amount of ultraviolet radiation to which the earth was then exposed was 10,000 times more than evolutionists' estimates. This intense radiation would unavoidably have freed oxygen by decomposing the water vapor and carbon dioxide in the atmosphere. This situation completely negates Miller's experiment, in which oxygen was completely neglected. If oxygen had been used in the experiment, methane would have decomposed into carbon dioxide and water, and ammonia into nitrogen and water. On the other hand, in an environment where there was no oxygen, there would be no ozone layer either; therefore, the amino acids would have immediately been destroyed, since they would have been exposed to the most intense ultraviolet rays without the protection of the ozone layer. In other words, with or without oxygen in the primordial world, the result would have been a deadly environment for the amino acids. 4- At the end of Miller's experiment, many organic acids had also been formed with characteristics detrimental to the structure and function of living things. If the amino acids had not been isolated, and had been left in the same environment with these chemicals, their destruction or transformation into different compounds through chemical reactions would have been unavoidable. Moreover, Miller's experiment also produced right-handed amino acids.258 The existence of these amino acids refuted the theory even within its own terms, because right-handed amino acids cannot function in the composition of living organisms. To conclude, the circumstances in which amino acids were formed in Miller's experiment were not suitable for life. In truth, this medium took the form of an acidic mixture destroying and oxidizing the useful molecules obtained. All these facts point to one firm truth: Miller's experiment cannot claim to have proved that living things formed by chance under primordial earth-like conditions. The whole experiment is nothing more than a deliberate and controlled laboratory experiment to synthesize amino acids. The amount and types of the gases used in the experiment were ideally determined to allow amino acids to originate. 
The amount of energy supplied to the system was neither too much nor too little, but arranged precisely to enable the necessary reactions to occur. The experimental apparatus was isolated, so that it would not allow the leaking of any harmful, destructive, or any other kind of elements to hinder the formation of amino acids. No elements, minerals or compounds that were likely to have been present on the primordial earth, but which would have changed the course of the reactions, were included in the experiment. Oxygen, which would have prevented the formation of amino acids because of oxidation, is only one of these destructive elements. Even under such ideal laboratory conditions, it was impossible for the amino acids produced to survive and avoid destruction without the "cold trap" mechanism. In fact, by his experiment, Miller destroyed evolution's claim that "life emerged as the result of unconscious coincidences." That is because, if the experiment proves anything, it is that amino acids can only be produced in a controlled laboratory environment where all the conditions are specifically designed by conscious intervention. Today, Miller's experiment is totally disregarded even by evolutionist scientists. In the February 1998 issue of the famous evolutionist science journal Earth, the following statements appear in an article titled "Life's Crucible": Geologist now think that the primordial atmosphere consisted mainly of carbon dioxide and nitrogen, gases that are less reactive than those used in the 1953 experiment. And even if Miller's atmosphere could have existed, how do you get simple molecules such as amino acids to go through the necessary chemical changes that will convert them into more complicated compounds, or polymers, such as proteins? Miller himself throws up his hands at that part of the puzzle. "It's a problem," he sighs with exasperation. "How do you make polymers? That's not so easy."259 As seen, today even Miller himself has accepted that his experiment does not lead to an explanation of the origin of life. In the March 1998 issue of National Geographic, in an article titled "The Emergence of Life on Earth," the following comments appear: Many scientists now suspect that the early atmosphere was different to what Miller first supposed. They think it consisted of carbon dioxide and nitrogen rather than hydrogen, methane, and ammonia. That's bad news for chemists. When they try sparking carbon dioxide and nitrogen, they get a paltry amount of organic molecules - the equivalent of dissolving a drop of food colouring in a swimming pool of water. Scientists find it hard to imagine life emerging from such a diluted soup.260 In brief, neither Miller's experiment, nor any other similar one that has been attempted, can answer the question of how life emerged on earth. All of the research that has been done shows that it is impossible for life to emerge by chance, and thus confirms that life is created. The reason evolutionists do not accept this obvious reality is their blind adherence to prejudices that are totally unscientific. Interestingly enough, Harold Urey, who organized the Miller experiment with his student Stanley Miller, made the following confession on this subject: All of us who study the origin of life find that the more we look into it, the more we feel it is too complex to have evolved anywhere. We all believe as an article of faith that life evolved from dead matter on this planet. 
It is just that its complexity is so great, it is hard for us to imagine that it did.261 The Primordial Atmosphere and Proteins Evolutionist sources use the Miller experiment, despite all of its inconsistencies, to try to gloss over the question of the origin of amino acids. By giving the impression that the issue has long since been resolved by that invalid experiment, they try to paper over the cracks in the theory of evolution. However, to explain the second stage of the origin of life, evolutionists faced an even greater problem than that of the formation of amino acids-namely, the origin of proteins, the building blocks of life, which are composed of hundreds of different amino acids bonding with each other in a particular order. Claiming that proteins were formed by chance under natural conditions is even more unrealistic and unreasonable than claiming that amino acids were formed by chance. In the preceding pages we have seen the mathematical impossibility of the haphazard uniting of amino acids in proper sequences to form proteins with probability calculations. Now, we will examine the impossibility of proteins being produced chemically under primordial earth conditions. The Problem of Protein Synthesis in Water As we saw before, when combining to form proteins, amino acids form a special bond with one another called the peptide bond. A water molecule is released during the formation of this peptide bond. This fact definitely refutes the evolutionist explanation that primordial life originated in water, because, according to the "Le Châtelier principle" in chemistry, it is not possible for a reaction that releases water (a condensation reaction) to take place in a hydrous environment. The chances of this kind of a reaction happening in a hydrate environment is said to "have the least probability of occurring" of all chemical reactions. Hence the ocean, which is claimed to be where life began and amino acids originated, is definitely not an appropriate setting for amino acids to form proteins.262 On the other hand, it would be irrational for evolutionists to change their minds and claim that life originated on land, because the only environment where amino acids could have been protected from ultraviolet radiation is in the oceans and seas. On land, they would be destroyed by ultraviolet rays. The Le Châtelier principle, on the other hand, disproves the claim of the formation of life in the sea. This is another dilemma confronting evolution. Challenged by the above dilemma, evolutionists began to invent unrealistic scenarios based on this "water problem" that so definitively refuted their theories. Sydney Fox was one of the best known of these researchers. Fox advanced the following theory to solve the problem. According to him, the first amino acids must have been transported to some cliffs near a volcano right after their formation in the primordial ocean. The water contained in this mixture that included the amino acids must have evaporated when the temperature increased above boiling point on the cliffs. The amino acids which were "dried out" in this way, could then have combined to form proteins. However this "complicated" way out was not accepted by many people in the field, because the amino acids could not have endured such high temperatures. Research confirmed that amino acids are immediately destroyed at very high temperatures. But Fox did not give up. He combined purified amino acids in the laboratory, "under very special conditions," by heating them in a dry environment. 
The amino acids combined, but still no proteins were obtained. What he actually ended up with was simple and disordered loops of amino acids, arbitrarily combined with each other, and these loops were far from resembling any living protein. Furthermore, if Fox had kept the amino acids at a steady temperature, then these useless loops would also have disintegrated. Another point that nullified the experiment was that Fox did not use the useless end products obtained in Miller's experiment; rather, he used pure amino acids from living organisms. This experiment, however, which was intended to be a continuation of Miller's experiment, should have started out from the results obtained by Miller. Yet neither Fox, nor any other researcher, used the useless amino acids Miller produced. Fox's experiment was not even welcomed in evolutionist circles, because it was clear that the meaningless amino acid chains that he obtained (which he termed "proteinoids") could not have formed under natural conditions. Moreover, proteins, the basic units of life, still could not be produced. The problem of the origin of proteins remained unsolved. In an article in the popular science magazine, Chemical Engineering News, which appeared in the 1970s, Fox's experiment was mentioned as follows: Sydney Fox and the other researchers managed to unite the amino acids in the shape of "proteinoids" by using very special heating techniques under conditions which in fact did not exist at all in the primordial stages of Earth. Also, they are not at all similar to the very regular proteins present in living things. They are nothing but useless, irregular chemical stains. It was explained that even if such molecules had formed in the early ages, they would definitely be destroyed.263 Indeed, the proteinoids Fox obtained were totally different from real proteins, both in structure and function. The difference between proteins and these proteinoids was as huge as the difference between a piece of high-tech equipment and a heap of unprocessed iron. Furthermore, there was no chance that even these irregular amino acid chains could have survived in the primordial atmosphere. Harmful and destructive physical and chemical effects caused by heavy exposure to ultraviolet light and other unstable natural conditions would have caused these proteinoids to disintegrate. Because of the Le Châtelier principle, it was also impossible for the amino acids to combine underwater, where ultraviolet rays would not reach them. In view of this, the idea that the proteinoids were the basis of life eventually lost support among scientists. The Origin of the DNA Molecule Our examinations so far have shown that the theory of evolution is in a serious quandary at the molecular level. Evolutionists have shed no light on the formation of amino acids at all. The formation of proteins, on the other hand, is another mystery all its own. Yet the problems are not even limited just to amino acids and proteins: These are only the beginning. Beyond them, the extremely complex structure of the cell leads evolutionists to yet another impasse. The reason for this is that the cell is not just a heap of amino-acid-structured proteins, but rather the most complex system man has ever encountered. 
While the theory of evolution was having such trouble providing a coherent explanation for the existence of the molecules that are the basis of the cell structure, developments in the science of genetics and the discovery of nucleic acids (DNA and RNA) produced brand-new problems for the theory. In 1953, James Watson and Francis Crick launched a new age in biology with their work on the structure of DNA. The molecule known as DNA, which is found in the nucleus of each of the 100 trillion cells in our bodies, contains the complete blueprint for the construction of the human body. The information regarding all the characteristics of a person, from physical appearance to the structure of the inner organs, is recorded in DNA within the sequence of four special bases that make up the giant molecule. These bases are known as A, T, G, and C, according to the initial letters of their names. All the structural differences among people depend on variations in the sequences of these letters. In addition to features such as height, and eye, hair and skin colors, the DNA in a single cell also contains the design of the 206 bones, the 600 muscles, the 100 billion nerve cells (neurons), 1,000 trillion connections between the neurons of the brain, 97,000 kilometers of veins, and the 100 trillion cells of the human body. If we were to write down the information coded in DNA, then we would have to compile a giant library consisting of 900 volumes of 500 pages each. But the information this enormous library would hold is encoded inside the DNA molecules in the cell nucleus, which is far smaller than the 1/100th-of-a-millimeter-long cell itself.
DNA Cannot Be Explained by Non-Design
At this point, there is an important detail that deserves attention. An error in the sequence of the nucleotides making up a gene would render that gene completely useless. When it is considered that there are 200,000 genes in the human body, it becomes clearer how impossible it is for the millions of nucleotides making up these genes to have been formed, in the right sequence, by chance. The evolutionary biologist Frank Salisbury has comments on this impossibility:
A medium protein might include about 300 amino acids. The DNA gene controlling this would have about 1,000 nucleotides in its chain. Since there are four kinds of nucleotides in a DNA chain, one consisting of 1,000 links could exist in 4¹⁰⁰⁰ forms. Using a little algebra (logarithms) we can see that 4¹⁰⁰⁰ = 10⁶⁰⁰. Ten multiplied by itself 600 times gives the figure 1 followed by 600 zeros! This number is completely beyond our comprehension.264
The number 4¹⁰⁰⁰ is the equivalent of 10⁶⁰⁰. This means 1 followed by 600 zeros. As 1 with 12 zeros after it indicates a trillion, 600 zeros represents an inconceivable number. The impossibility of the formation of RNA and DNA by a coincidental accumulation of nucleotides is expressed by the French scientist Paul Auger in this way:
We have to sharply distinguish the two stages in the chance formation of complex molecules such as nucleotides by chemical events. The production of nucleotides one by one-which is possible-and the combination of these within very special sequences.
The second is absolutely impossible.265 For many years, Francis Crick believed in the theory of molecular evolution, but eventually even he had to admit to himself that such a complex molecule could not have emerged spontaneously by chance, as the result of an evolutionary process: An honest man, armed with all the knowledge available to us now, could only state that, in some sense, the origin of life appears at the moment to be almost a miracle.266 The Turkish evolutionist Professor Ali Demirsoy was forced to make the following confession on the issue: In fact, the probability of the formation of a protein and a nucleic acid (DNA-RNA) is a probability way beyond estimating. Furthermore, the chance of the emergence of a certain protein chain is so slight as to be called astronomic.267 A very interesting paradox emerges at this point: While DNA can only replicate with the help of special proteins (enzymes), the synthesis of these proteins can only be realized by the information encoded in DNA. As they both depend on each other, they have to exist at the same time for replication. Science writer John Horgan explains the dilemma in this way: DNA cannot do its work, including forming more DNA, without the help of catalyticproteins, or enzymes. In short, proteins cannot form without DNA, but neither can DNA form without proteins.268 This situation once again undermines the scenario that life could have come about by accident. Homer Jacobson, Professor Emeritus of Chemistry, comments: Directions for the reproduction of plans, for energy and the extraction of parts from the current environment, for the growth sequence, and for the effector mechanism translating instructions into growth-all had to be simultaneously present at that moment [when life began]. This combination of events has seemed an incredibly unlikely happenstance...269 The quotation above was written two years after the discovery of the structure of DNA by Watson and Crick. But despite all the developments in science, this problem for evolutionists remains unsolved. This is why German biochemist Douglas R. Hofstadter says: 'How did the Genetic Code, along with the mechanisms for its translation (ribosomes and RNA molecules), originate?' For the moment, we will have to content ourselves with a sense of wonder and awe, rather than with an answer.270 Stanley Miller and Francis Crick's close associate from the University of San Diego, California, the highly reputed evolutionist Dr. Leslie Orgel says in an article published in 1994: It is extremely improbable that proteins and nucleic acids, both of which are structurally complex, arose spontaneously in the same place at the same time. Yet it also seems impossible to have one without the other. And so, at first glance, one might have to conclude that life could never, in fact, have originated by chemical means.271 Alongside all of this, it is chemically impossible for nucleic acids such as DNA and RNA, which possess a definite string of information, to have emerged by chance, or for even one of the nucleotides which compose them to have come about by accident and to have survived and maintained its unadulterated state under the conditions of the primordial world. Even the famous journal Scientific American, which follows an evolutionist line, has been obliged to confess the doubts of evolutionists on this subject: Even the simpler molecules are produced only in small amounts in realistic experiments simulating possible primitive earth conditions. 
What is worse, these molecules are generally minor constituents of tars: It remains problematical how they could have been separated and purified through geochemical processes whose normal effects are to make organic mixtures more and more of a jumble. With somewhat more complex molecules these difficulties rapidly increase. In particular a purely geochemical origin of nucleotides (the subunits of DNA and RNA) presents great difficulties.272 Of course, the statement "it is quite impossible for life to have emerged by chemical means" simply means that life is the product of an intelligent design. This "chemical evolution" that evolutionists have been talking about since the beginning of the last century never happened, and is nothing but a myth. But most evolutionists believe in this and similar totally unscientific fairy tales as if they were true, because accepting intelligent design means accepting creation-and they have conditioned themselves not to accept this truth. One famous biologist from Australia, Michael Denton, discusses the subject in his book Evolution: A Theory in Crisis: To the skeptic, the proposition that the genetic programmes of higher organisms, consisting of something close to a thousand million bits of information, equivalent to the sequence of letters in a small library of 1,000 volumes, containing in encoded form countless thousands of intricate algorithms controlling, specifying, and ordering the growth and development of billions and billions of cells into the form of a complex organism, were composed by a purely random process is simply an affront to reason. But to the Darwinist, the idea is accepted without a ripple of doubt - the paradigm takes precedence!273 The Invalidity of the RNA World The discovery in the 1970s that the gases originally existing in the primitive atmosphere of the earth would have rendered amino acid synthesis impossible was a serious blow to the theory of molecular evolution. Evolutionists then had to face the fact that the "primitive atmosphere experiments" by Stanley Miller, Sydney Fox, Cyril Ponnamperuma and others were invalid. For this reason, in the 1980s the evolutionists tried again. As a result, the "RNA World" hypothesis was advanced. This scenario proposed that, not proteins, but rather the RNA molecules that contained the information for proteins, were formed first. According to this scenario, advanced by Harvard chemist Walter Gilbert in 1986, inspired by the discovery about "ribozymes" by Thomas Cech, billions of years ago an RNA molecule capable of replicating itself formed somehow by accident. Then this RNA molecule started to produce proteins, having been activated by external influences. Thereafter, it became necessary to store this information in a second molecule, and somehow the DNA molecule emerged to do that. Made up as it is of a chain of impossibilities in each and every stage, this scarcely credible scenario, far from providing any explanation of the origin of life, only magnified the problem, and raised many unanswerable questions: 1. Since it is impossible to accept the coincidental formation of even one of the nucleotides making up RNA, how can it be possible for these imaginary nucleotides to form RNA by coming together in a particular sequence? Evolutionist John Horgan admits the impossibility of the chance formation of RNA; As researchers continue to examine the RNA-World concept closely, more problems emerge. How did RNA initially arise? 
RNA and its components are difficult to synthesize in a laboratory under the best of conditions, much less under really plausible ones.274
2. Even if we suppose that it formed by chance, how could this RNA, consisting of just a nucleotide chain, have "decided" to self-replicate, and with what kind of mechanism could it have carried out this self-replicating process? Where did it find the nucleotides it used while self-replicating? Even evolutionist microbiologists Gerald Joyce and Leslie Orgel express the desperate nature of the situation in their book In the RNA World:
This discussion… has, in a sense, focused on a straw man: the myth of a self-replicating RNA molecule that arose de novo from a soup of random polynucleotides. Not only is such a notion unrealistic in light of our current understanding of prebiotic chemistry, but it would strain the credulity of even an optimist's view of RNA's catalytic potential.275
3. Even if we suppose that there was self-replicating RNA in the primordial world, that numerous amino acids of every type ready to be used by RNA were available, and that all of these impossibilities somehow took place, the situation still does not lead to the formation of even one single protein. For RNA only includes information concerning the structure of proteins. Amino acids, on the other hand, are raw materials. Nevertheless, there is no mechanism for the production of proteins. To consider the existence of RNA sufficient for protein production is as nonsensical as expecting a car to assemble itself by simply throwing the blueprint onto a heap of parts piled up on top of each other. A blueprint cannot produce a car all by itself without a factory and workers to assemble the parts according to the instructions contained in the blueprint; in the same way, the blueprint contained in RNA cannot produce proteins by itself without the cooperation of other cellular components which follow the instructions contained in the RNA.
Proteins are produced in the ribosome factory with the help of many enzymes, and as a result of extremely complex processes within the cell. The ribosome is a complex cell organelle made up of proteins. This leads, therefore, to another unreasonable supposition-that ribosomes, too, should have come into existence by chance at the same time. Even Nobel Prize winner Jacques Monod, who was one of the most fanatical defenders of evolution-and atheism-explained that protein synthesis can by no means be considered to depend merely on the information in the nucleic acids:
The code is meaningless unless translated. The modern cell's translating machinery consists of at least 50 macromolecular components, which are themselves coded in DNA: the code cannot be translated otherwise than by products of translation themselves.
This is a support page to the multimedia chapter Oscillations. It gives background information and further details. Oscillations are important in their own right. They are also involved in waves. This movie clip shows a torsional wave – first a travelling wave, then a standing wave. The oscillation of one point on the wave is highlighted. We discuss it below.
In the slow motion movie clip at right, the mass glides on an air track. The track is perforated with small holes, through which flows air from the inside, where the pressure is above atmospheric. So the mass is supported, like a hovercraft, on a cushion of air, and friction is eliminated. Because the speed is small, air resistance is very small. Consequently, the only non-negligible force in the horizontal direction is that exerted by the two springs. Because there are no vertical displacements, we discuss here only the horizontal displacement.
At the equilibrium position (x = 0 in the graph below the clip), the forces exerted by the two springs are equal in magnitude but opposite in direction, so the total force is zero. To the right of equilibrium, the force acts to accelerate the mass to the left, and vice versa. (The graph is rotated 90° from its normal orientation so that we can compare it with the motion.) Let's begin (as do the graph and the animation) with the mass to the right of equilibrium and at rest. Let's see what happens when I release it:
First, the spring force acts to the left and the mass is accelerated towards x = 0. When it reaches x = 0, it has a velocity and therefore a momentum to the left. (Near equilibrium, the forces are small, so there is a region near x = 0 over which the velocity changes little: the x(t) graph is almost straight.) When it arrives at x = 0, because of its momentum to the left, it overshoots, i.e. it continues travelling to the left. While it is to the left of x = 0, however, the spring force acts to the right. This force gradually slows the mass until it stops. The point at which it stops is, of course, its maximum displacement to the left. Once it is stopped on the left hand side of equilibrium, the spring force accelerates it to the right, so the velocity and momentum to the right increase. When it reaches equilibrium again, it now has its maximum rightwards momentum. It overshoots and continues to the right. The spring force now acts to the left, so it decelerates until it stops at its maximum rightwards displacement.
In the first movie shown at right, the mass is released from rest, so the displacement is maximal (x = A) at t = 0, and the required phase constant is φ = π/2. (Indeed, for this particular case, we could say that the curve is a cos function rather than a sine.)
x1 = A sin (ωt + π/2) = A cos (ωt)
In the second movie shown at right, however, the mass is given an impulsive start, so the initial condition approximates maximum velocity and x = 0 at t = 0. This requires φ = 0, so
x2 = A sin (ωt + 0) = A sin (ωt)
Here we start with an initial velocity: since dx2/dt = Aω cos (ωt), the initial velocity is v0 = Aω. Note that the initial condition determines both φ and A.
In both these clips, a rotating line (an animated phasor diagram) is used to show that Simple Harmonic Motion is the projection onto one dimension of circular motion. This is explained in detail in the Kinematics of Simple Harmonic Motion in Physclips. Phasors are commonly used to facilitate calculations in AC circuits.
We saw above that x = A sin (ωt + φ), where ω² = k/m. The cyclic frequency is f = 1/T, where T is the period.
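To make the link between initial conditions and the constants explicit, here is a minimal numerical sketch of how the two cases above (release from rest at x = A, and an impulsive start at x = 0) pick out φ = π/2 and φ = 0. The function name and the numerical values are invented for illustration.

```python
import numpy as np

def amplitude_and_phase(x0, v0, omega):
    """For x(t) = A sin(omega*t + phi):  x0 = A sin(phi)  and  v0 = A*omega*cos(phi)."""
    A = np.hypot(x0, v0 / omega)
    phi = np.arctan2(x0, v0 / omega)
    return A, phi

omega = 2.0 * np.pi                                        # made-up value (1 Hz)
print(amplitude_and_phase(x0=0.1, v0=0.0, omega=omega))    # released from rest: phi = pi/2
print(amplitude_and_phase(x0=0.0, v0=0.5, omega=omega))    # impulsive start:    phi = 0
```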
The sine function goes through one complete cycle when its argument increases by 2π, so we require that (ω(t+T) + φ) − (ωt + φ) = 2π, so ωT = 2π, so ω = 2π/T = 2πf = (k/m)½. This parameter is determined by the system: the particular mass and spring used. For a linear system, the frequency is independent of amplitude (see below, however, for a nonlinear system).
Compare the oscillations shown in the two clips at right. The first uses one air track glider and the second uses two similar gliders, so the mass is doubled. The period is increased by about 40%, i.e. by a factor of √2, so the frequency is decreased by the same factor. Though it is not so easy to see in the video, at right we have used stiffer springs with a higher value of k. Here the period is shorter and therefore the frequency higher than in all the previous examples.
Because we know x, the displacement from equilibrium, we know the potential energy U, which is just that of a linear spring. Taking the zero of potential energy at x = 0, U = ½ kx². Here, x = A sin (ωt + φ) so U = ½ kx² = ½ kA² sin² (ωt + φ). Because we know v, the velocity in the x direction, we know the kinetic energy K. v = ωA cos (ωt + φ) so K = ½ mv² = ½ mω²A² cos² (ωt + φ). Adding kinetic and potential energies gives the mechanical energy, E. Using the expressions above, and substituting ω² = k/m, we have
E = U + K = ½ kx² + ½ mv² = ½ kA² sin² (ωt + φ) + ½ kA² cos² (ωt + φ).
Now we can use the identity sin² θ + cos² θ = 1, which gives E = U + K = ½ kA², which is a constant: it does not depend on time. This is because of the air track: here, no non-negligible nonconservative forces act, so mechanical energy is conserved. U (in purple, like x), K (in red, like v) and E (in black) are shown as functions of time in the animated graph at right: the mechanical energy is continuously exchanged between potential and kinetic. At the extrema of the motion, where |x| = A, the velocity is zero, so all of the mechanical energy is potential energy.
A mass m hangs from a light, inextensible string of length L, which is large+ compared to the dimensions of the mass, so the mass can be considered as a particle. The horizontal displacement of m from the position of equilibrium is x, and the string makes an angle θ with the vertical, as shown in the sketch. For the moment, we consider only the case in which θ is small, so x << L. Applying Newton's second law in the vertical upwards direction, we have*
|T| cos θ − mg = m ay
Because θ is small, the vertical acceleration ay is negligible. Further, cos θ, which may be expanded as cos θ ≅ 1 − ½ θ² +..., is approximately 1. So the magnitude |T| of the tension in the string is approximately mg. Applying Newton's second law in the horizontal direction,
− |T| sin θ = m ax = m d²x/dt².
Then we write sin θ = x/L. Substituting this in the preceding equation and rearranging gives d²x/dt² = − (g/L)x or d²x/dt² = − ω² x, where we define ω² = g/L. This, of course, is the differential equation we have solved above and elsewhere. Its solution is x = A sin (ωt + φ), where ω = (g/L)½. Writing T as the period (not to be confused with the magnitude of the tension |T|), we write 1/T = f = ω/2π = (1/2π)(g/L)½.
Here, the potential energy is gravitational. If we take the lowest point of the pendulum (y = 0) as the reference of U, then, making the same substitutions as above, U = mgy = mgL(1 − cos θ) ≅ mgL(1 − (1 − ½ θ²)), so U ≅ ½ mgLθ² = ½ (mg/L)x². The energy terms are illustrated with histograms at right. The reference for potential energy is arbitrary, as we have suggested in the animation.
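As a cross-check on the algebra above, the short sketch below (with made-up values of k, m, A and φ) confirms numerically that U + K stays fixed at ½kA², and evaluates the small-angle pendulum period T = 2π(L/g)½ for a one-metre string.

```python
import numpy as np

# Check that U + K stays at ½kA² for x = A sin(ωt + φ).  All values are made up.
k, m, A, phi = 8.0, 2.0, 0.05, np.pi / 2
omega = np.sqrt(k / m)

t = np.linspace(0.0, 3 * 2 * np.pi / omega, 1000)
x = A * np.sin(omega * t + phi)
v = A * omega * np.cos(omega * t + phi)
U = 0.5 * k * x**2
K = 0.5 * m * v**2
print(np.allclose(U + K, 0.5 * k * A**2))   # True: E does not depend on time

# Small-angle pendulum: the same equation with ω² = g/L, so T = 2π (L/g)½.
g, L = 9.8, 1.0
print(2 * np.pi * np.sqrt(L / g))           # ≈ 2.0 s for a 1 m string
```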
+The importance of this condition is that its rotational kinetic energy can be neglected in comparison with its translational kinetic energy and gravitational potential energy. We shall also neglect the slow rotation of the earth.
* Notice that, in this equation, we have used the symbol 'm' in two conceptually different ways: the m in mg is the gravitational mass, the quantity that interacts with gravitational fields. The m in ma is the inertial mass, the quantity that resists acceleration. See this link for more discussion.
Pendulums are easy to make and their periods can be measured accurately. Further, because they are only simple harmonic oscillators in the small angle approximation analysed above, they provide a good system for showing the effects of nonlinearity. That is the purpose of the movie clips below, which show how, for the nonlinear oscillator, the period varies with amplitude and, at large amplitudes, the motion is not sinusoidal. This is an example of an oscillation that is harmonic, but not simple harmonic. Periodic motion is motion that repeats: after a certain time T, called the period, the motion repeats, or x(t+T) = x(t). Periodic motion is called harmonic motion and may be expressed as a sum of harmonics. We shall discuss harmonics later, when we meet standing waves in one dimension. Meanwhile, see What is a sound spectrum? and How harmonic are harmonics?
In practice, nonconservative forces are usually present, so mechanical energy is lost over each cycle. The type of loss that is most commonly analysed is that produced by a force proportional to the velocity, but in the opposite direction. Analysing that case in one dimension, we would write Floss = − bv = − b dx/dt. Let's add this term to the analysis given above. Newton's second law is ΣF = m d²x/dt², which gives the differential equation
m d²x/dt² = − kx − b dx/dt, or
d²x/dt² + 2β dx/dt + ω0² x = 0, where ω0² = k/m and β = b/2m.
We can verify by substitution that this differential equation has a solution
x = A e^(−βt) sin (ωt + φ), where ω² = ω0² − β².
So, for this particular damping force, we should expect an oscillation whose amplitude decreases exponentially with time. Forces proportional to velocity arise from the viscosity of simple Newtonian fluids, if motion is sufficiently slow. However, losses encountered in nature are frequently more complicated. In the case below, the pendulum is mounted on a roller bearing. The loss force in this case has a dependence on v that is less strong than proportionality. What is happening here? Why is the decay faster than exponential? Liquids with long chain molecules frequently exhibit non-Newtonian viscosity. When the velocity gradient is large enough, shear forces tend to align the molecules at right angles to the velocity gradient, which reduces the viscosity. I speculate that the grease in the bearing is behaving in this manner. The relative infrequency of linear losses in condensed phases is mathematically inconvenient, because the linear equation is much easier to handle analytically. Linear damping does exist, but real systems are rarely that simple.
In the analysis above, we have included the restoring force (spring in the first case, gravity for the pendulum). Other, time-dependent forces may also be present. We could include these as F(t) and write
d²x/dt² + 2β dx/dt + ω0² x = F(t)/m.
The externally applied force F(t) might be a simple oscillation or might have oscillating components.
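As a quick check of the "verify by substitution" step, the sketch below (with arbitrary, invented values of ω0, β, A and φ) evaluates x = A e^(−βt) sin (ωt + φ) and its derivatives, and confirms that the left-hand side of the damped equation vanishes to within rounding error.

```python
import numpy as np

# Verify numerically that x = A e^(-βt) sin(ωt + φ), with ω² = ω0² − β²,
# satisfies  x'' + 2β x' + ω0² x = 0.  Parameter values are made up.
omega0, beta, A, phi = 5.0, 0.4, 1.0, 0.3
omega = np.sqrt(omega0**2 - beta**2)

t = np.linspace(0.0, 10.0, 2001)
s, c = np.sin(omega * t + phi), np.cos(omega * t + phi)
x   = A * np.exp(-beta * t) * s
xd  = A * np.exp(-beta * t) * (-beta * s + omega * c)                            # dx/dt
xdd = A * np.exp(-beta * t) * ((beta**2 - omega**2) * s - 2 * beta * omega * c)  # d²x/dt²

residual = xdd + 2 * beta * xd + omega0**2 * x
print(np.max(np.abs(residual)))   # ~1e-13: the trial solution satisfies the equation
```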
An oscillating external force gives rise to the phenomenon of resonance. The apparatus shown at right has a set of pendulums of different lengths attached to the same shaft via rods that rotate with the shaft. On the far end of the shaft is a pendulum with much larger mass, similarly attached. The oscillation of the massive pendulum tends to rotate the shaft at an angular frequency ω = (g/L)½, where L is its length. This rotation produces an external, time dependent force on each of the small pendulums, each of which has its own characteristic frequency ω0. While the movie clip is downloading, make a prediction of what you think will happen to the small pendulums. How will the work done by this external force depend on ω0?
If the force is applied in the same direction as the motion, then the work done will be positive. If ω ≠ ω0, then we might expect that, over several periods, there would be some periods for which the velocity and the external force were in the same direction, but these would tend to cancel out. On the other hand, if ω ≅ ω0, and if the phase between them were suitable, we can imagine cases where the work done over several cycles would be large. ω0 is called the resonance frequency of the system. The two clips below show the importance of the frequency and phase of the external force.
Let's be quantitative, by adding an external force (F0 sin ωt) to our previous analysis, which yields the equation
d²x/dt² + 2β dx/dt + ω0² x = (1/m) F0 sin ωt,
where again ω0² = k/m. As written, this equation applies to an external force applied over a very long time: we have made no mention of when and how the force starts. So let's consider what happens in the steady state, the state over which the average work being done by (F0 sin ωt) equals the average rate at which energy is being dissipated by the nonconservative force (Floss = − bv = − 2mβ dx/dt). We can verify by substitution that the solution to this equation is x = A sin (ωt + φ), where
A = (F0/m)((ω² − ω0²)² + (2βω)²)−½ and where tan φ = 2βω/(ω0² − ω²).
Note that, in this steady state, the amplitude is large if ω ≅ ω0 and if the loss term, βω, is small, i.e. if dissipative forces are not doing work at a large rate. At resonance (i.e. when ω = ω0), the amplitude is A = F0/2βω0m.
Sometimes the loss term, βω, is hard to measure directly, so instead we measure the Quality factor, Q, defined as Q = ω0/Δω, the ratio of the resonant frequency to the bandwidth, where Δω is the difference between the frequencies that give half maximum power, or amplitudes reduced by √2. Using the expression above for A, the half power points occur when |ω² − ω0²| = 2βω. Solving this quadratic gives ω = √(β² + ω0²) ± β. For reasonably high Q, i.e. when Δω << ω0, we can use the binomial expansion to give ω ≅ ω0(1 ± β/ω0), which gives Δω ≅ 2β and so β ≅ ω0/2Q. This then gives the amplitude at resonance as A0 ≅ F0Q/mω0² = F0/mω0Δω. This makes qualitative sense: the amplitude is obviously large for large F and small m, it is small at high frequency when there is not enough time per cycle to displace it much, and large if the resonance is strong, i.e. if the Q factor is high or the bandwidth low.
These expressions apply to a system with linear losses, where the dissipative force is proportional to velocity, as is the case for viscosity. The nonlinear losses (e.g. dynamic drag) that one often meets in practice yield equations that are rather more difficult to solve and are not treated here. In the equation above, φ is the phase by which x(t) is ahead of F(t).
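These relations are easy to check numerically. The sketch below (all parameter values invented for illustration) evaluates the steady-state amplitude A(ω) from the expression above, reads off the half-power bandwidth, and compares the resulting Q with the prediction ω0/2β.

```python
import numpy as np

# Steady-state amplitude A(ω) of the driven, damped oscillator, and an estimate
# of Q from the half-power bandwidth.  All parameter values are made up.
m, k, b, F0 = 0.2, 80.0, 0.16, 1.0
omega0 = np.sqrt(k / m)          # 20 rad/s
beta = b / (2 * m)               # 0.4 s⁻¹

w = np.linspace(0.5 * omega0, 1.5 * omega0, 200001)
A = (F0 / m) / np.sqrt((w**2 - omega0**2)**2 + (2 * beta * w)**2)

in_band = w[A >= A.max() / np.sqrt(2)]      # amplitudes within 1/√2 of the peak
bandwidth = in_band[-1] - in_band[0]

print(omega0 / bandwidth)        # Q measured from the curve: ≈ 25
print(omega0 / (2 * beta))       # Q predicted as ω0/2β:      = 25
```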
At low frequency (ω << ω0), the phase φ is near zero: negligible force is required to accelerate the mass, so the driving force simply pushes the spring: F = kx. Conversely, at high frequency (ω >> ω0), φ is near 180° and the force and displacement are in antiphase: the driving force is in phase with the acceleration, because the acceleration term dominates when ω is high. At resonance (ω = ω0), φ = 90° and the driving force is in phase with the velocity. We do not discuss here the transient behaviour: the way a system responds when the external force is 'turned on' or 'turned off'. Examples are shown, however, in the movie clips above.
We have treated the mass on springs and the pendulum mass as though they were particles: objects with no size or zero dimensions. The displacement of a particle can be written as x(t), a function of time alone. For extended objects that are not completely rigid, there is the possibility of oscillation with amplitudes and phases that vary within the object. A simple example is an ideal string, extended in the x direction, whose transverse displacement can be written as y(x,t). This animation shows a wave travelling to the right (green line) and another, of equal frequency and amplitude, travelling to the left (blue line). In a linear medium, these add to give a standing wave, shown here as a red line that represents (with the vertical scale exaggerated) a wave on a string fixed at the position of the two vertical lines. The physics and musical applications of strings are discussed in Strings, standing waves and harmonics from our Music Acoustics site.
The movie clip below shows a wave in a one-dimensional medium. The waves in this clip are torsional waves θ = θ(x,t). The straight line across the image is a strip of steel (a blade from a band saw), to which are attached the long arms that make the angular displacement visible. For convenience, the gravitational field has been rotated 90° in this clip. We shall study these effects in the chapter on standing waves. For the moment, however, let's look quickly at other examples. In three dimensions, we can excite oscillations with displacement ξ = ξ(x,y,z,t). In general, this is hard to show on a two-dimensional screen, except in animation. The clip of a drop of water shown below at left was made by Don Pettit in free fall, in the International Space Station. The motion is complicated, but slow because the restoring force is the surface tension of water, which is rather small on this scale. The clip linked below right is a very famous film, regularly used to remind engineering students of the importance of resonances in structures.
A Glossary of Frequently Misused or Misunderstood Physics Terms and Concepts. By Donald E. Simanek, Lock Haven University.
Technical terms of science have very specific meanings. Standard dictionaries are not always the best source of useful and correct definitions of them.
Accurate. Conforming closely to some standard. Having very small error of any kind. See: Uncertainty. Compare: precise.
Absolute uncertainty. The uncertainty in a measured quantity is due to inherent variations in the measurement process itself. The uncertainty in a result is due to the combined and accumulated effects of these measurement uncertainties that were used in the calculation of that result. When an uncertainty is expressed in the same units as the quantity itself it is called an absolute uncertainty. Uncertainty values are usually attached to the quoted value of an experimental measurement or result, one common format being: (quantity) ± (absolute uncertainty in that quantity). Compare: relative uncertainty.
Action. This technical term is an historic relic of the 17th century, before energy and momentum were understood. In modern terminology, action has the dimensions of energy×time. Planck's constant has those dimensions, and is therefore sometimes called Planck's quantum of action. Pairs of measurable quantities whose product has dimensions of energy×time are called conjugate quantities in quantum mechanics, and have a special relation to each other, expressed in Heisenberg's uncertainty principle. Unfortunately the word action persists in textbooks in meaningless statements of Newton's third law: "Action equals reaction." This statement is useless to the modern student, who hasn't the foggiest idea what action is. See: Newton's 3rd law for a useful definition. Also see Heisenberg's uncertainty principle.
Avogadro's constant. Avogadro's constant has the unit mole⁻¹. It is not merely a number, and should not be called Avogadro's number. It is correct to say that the number of particles in a gram-mole is 6.02 x 10²³. Some older books call this value Avogadro's number, and when that is done, no units are attached to it. This can be confusing and misleading to students who are conscientiously trying to learn how to balance units in equations. One must specify whether the value of Avogadro's constant is expressed for a gram-mole or a kilogram-mole. A few books prefer a kilogram-mole. The unit name for a gram-mole is simply mol. The unit name for a kilogram-mole is kmol. When the kilogram-mole is used, Avogadro's constant should be written: 6.02252 x 10²⁶ kmol⁻¹. The fact that Avogadro's constant has units further convinces us that it is not "merely a number." Though it seems inconsistent, the SI base unit is the gram-mole. As Mario Iona reminds me, SI is not an MKS system. Some textbooks still prefer to use the kilogram-mole, or worse, use it and the gram-mole. This affects their quoted values for the universal gas constant and the Faraday constant.
Is Avogadro's constant just a number? What about those textbooks which say "You could have a mole of stars, grains of sand, or people." In science we do use entities which are just numbers, such as π, e, 3, 100, etc. Though these are used in science, their definitions are independent of science. No experiment of science can ever determine their value, except approximately. Avogadro's constant, however, must be determined experimentally, for example by counting the number of atoms in a crystal.
The value of Avogadro's number found in handbooks is an experimentally determined number. You won't discover its value experimentally by counting stars, grains of sand, or people. You find it only by counting atoms or molecules in something of known relative molecular mass. And you won't find it playing any role in any law or theory about stars, sand, or people. The reciprocal of Avogadro's constant is numerically equal to the unified atomic mass unit, u, that is, 1/12 the mass of the carbon 12 atom. 1 u = 1.66043 x 10⁻²⁷ kg = 1/(6.02252 x 10²³) gram.
Because. Here's a word best avoided in physics. Whenever it appears one can be almost certain that it's a filler word in a sentence which says nothing worth saying, or a word used when one can't think of a good or specific reason. While the use of the word because as a link in a chain of logical steps is benign, one should still replace it with words more specifically indicative of the type of link which is meant. See: Why? Illustrative fable: The seeker after truth sought wisdom from a Guru who lived as a hermit on top of a Himalayan mountain. After a long and arduous climb to the mountain-top the seeker was granted an audience. Sitting at the feet of the great Guru, the seeker humbly said: "Please, answer for me the eternal question: Why?" The Guru raised his eyes to the sky, meditated for a bit, then looked the seeker straight in the eye and answered, with an air of sagacious profundity, "Because!"
Capacitance. The capacitance of a physical capacitor is measured by this procedure: Put equal and opposite charges on the capacitor's plates and then measure the potential between the plates. Then C = |Q/V|, where Q is the charge on one of the plates. Capacitors for use in circuits consist of two conducting bodies (plates). We speak of a capacitor as "charged" when it has charge Q on one plate, and -Q on the other. Of course the net charge of the entire object is zero; that is, the charged capacitor hasn't had net charge added to it, but has undergone an internal separation of charge. Unfortunately this process is usually called charging the capacitor, which is misleading because it suggests adding charge to the capacitor. In fact, this process usually consists of transferring charge from one plate to the other. The capacitance of a single object, say an isolated sphere, is determined by considering the other plate to be an infinite sphere surrounding it. The object is given charge, by moving charge from the infinite sphere, which acts as an infinite charge reservoir ("ground"). The potential of the object is the potential between the object and the infinite sphere. Capacitance depends only on the geometry of the capacitor's physical structure and the dielectric constant of the material medium in which the capacitor's electric field exists. The size of the capacitor's capacitance is the same whatever the charge and potential (assuming the dielectric constant doesn't change). This is true even if the charge on both plates is reduced to zero, and therefore the capacitor's potential is zero. If a capacitor with charge on its plates has a capacitance of, say, 2 microfarad, then its capacitance is also 2 microfarad when the plates have no charge. This should remind us that C = |Q/V| is not by itself the definition of capacitance, but merely a formula which allows us to relate the capacitance to the charge and potential when the capacitor plates have equal and opposite charge on them.
A common misunderstanding about electrical capacitance is to assume that capacitance represents the maximum amount of charge a capacitor can store. That is misleading because capacitors don't store charge (their total charge being zero). They "separate charge" so that their plates have equal and opposite charge. It is also wrong because the maximum charge one may put on a capacitor plate is determined by the potential at which dielectric breakdown occurs. Compare: capacity. We probably should avoid the phrases "charged capacitor", "charging a capacitor" and "store charge". Some have suggested the alternative expression "energizing a capacitor" because the process is one of giving the capacitor electrical potential energy by rearranging charges on it (or within it). Some who agree with most everything I have said on this topic still defend "stored charge". They say that the capacitor circuit separates charge and then stores equal and opposite charges on the capacitor plates presumably for release by discharge through a circuit (rather than by discharge within the capacitor). That's a correct description for it puts the capacitor in the context of the circuit to which it is attached. But the abbreviated phrase "The capacitor stores charge" is still misleading and should be avoided unless it is explained as I have done here. And it's still more to the point to say the capacitor stores electrical potential energy. Capacity. This word is properly used in names of quantities which express the relative amount of some quantity with respect to another quantity upon which it depends. For example, heat capacity is dU/dT, where U is the internal energy and T is the temperature. Electrical capacity, usually called capacitance, is another example: C = |dQ/dV|, where Q is the magnitude of charge on each capacitor plate and V is the potential difference between the plates. Consistent use of the word "capacitance" for C avoids this conceptual error. But the same misconceptions can occur with the others, and we don't have other names for them which might help avoid this. Heat capacity isn't the maximum amount of heat something can have. That would also incorrectly suggest that heat is a "substance", which it isn't. Centrifugal force. When a non-inertial rotating coordinate system is used to analyze motion, Newton's law F = ma is not correct unless one adds to the real forces a fictitious force called the centrifugal force. The centrifugal force required in the non-inertial system is equal and opposite to the centripetal force calculated in the inertial system. Since the centrifugal and centripetal forces are concepts used in two different formulations of the problem, they can not in any sense be considered a pair of reaction forces. Also, they act on the same body, not different bodies. See: centripetal force, action, and inertial systems. Centripetal force. The centripetal force is the radial component of the net force acting on a body when the problem is analyzed in an inertial system. The force is inward toward the instantaneous center of curvature of the path of the body. The size of the force is mv²/r, where r is the instantaneous radius of curvature. See: centrifugal force. cgs. The system of units based upon the fundamental metric units: centimeter, gram and second. Classical physics. The physics developed before about 1900, before we knew about relativity and quantum mechanics. See: modern physics.
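As a quick numerical check on the centripetal-force entry above, here is a minimal Python sketch; the function name centripetal_force and the car-on-a-curve numbers are my own illustrative choices, not something taken from the glossary.

    def centripetal_force(mass_kg, speed_m_per_s, radius_m):
        # Radial component of the net force, F = m*v**2/r, in an inertial frame.
        return mass_kg * speed_m_per_s**2 / radius_m

    # Example: a 1000 kg car rounding a curve of radius 50 m at 20 m/s.
    F = centripetal_force(1000.0, 20.0, 50.0)
    print(F)            # 8000 N, directed toward the center of curvature
    print(F / 1000.0)   # 8 m/s^2 of centripetal acceleration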
Closed system. A physical system on which no outside influences act; closed so that nothing gets in or out of the system and nothing from outside can influence the system's observable behavior or properties. Obviously we could never make measurements on a closed system unless we were in it†, for no information about it could get out of it! In practice we loosen up the condition a bit, and only insist that there be no interactions with the outside world that would affect those properties of the system that are being studied. † Besides, when the experimenter is a part of the system, all sorts of other problems arise. This is a dilemma physicists must deal with: the fact that if we take measurements, we are a part of the system, and must be very certain that we carry out experiments so that fact doesn't distort or prejudice the results. Conserved. A quantity is said to be conserved if under specified conditions its value does not change with time. Example: In a closed system, the charge, mass, total energy, linear momentum and angular momentum of the system are conserved. Philosophers debate whether mass and energy are fundamentally the same thing, and whether we should have a conservation of mass-energy law. If you want to learn more about this, see The Equivalence of Mass and Energy. Current. The time rate at which charge passes through a circuit element or through a fixed place in a conducting wire, I = dq/dt. Misuse alert: A very common mistake found in textbooks is to speak of "flow of current". Current itself is a flow of charge; what, then, could "flow of current" mean? It is either redundant, misleading, or wrong. This expression should be purged from our vocabulary. Compare a similar mistake: "The velocity moves West." Sounds absurd, doesn't it? Data. The word data is the plural of datum. Examples of correct usage: "The data are reasonable, considering the…" Dependent variable. See variable. Derive. To derive a result or conclusion is to show, using logic and mathematics, how a conclusion follows logically from certain assumed facts and principles. Dimensions. The fundamental measurables of a unit system in physics: those which are defined through operational definitions. All other measurable quantities in physics are defined through mathematical relations to the fundamental quantities. Therefore any physical measurable may be expressed as a mathematical combination of the dimensions. See: operational definitions. Example: In the MKSA (meter-kilogram-second-ampere) system of units, length, mass, time and current are the fundamental measurables, symbolically represented by L, M, T, and I. Therefore we say that velocity has the dimensions LT⁻¹. Energy has the dimensions ML²T⁻². Discrepancy. (1) Any deviation or departure from the expected. (2) A difference between two measurements or results. (3) A difference between an experimental determination of a quantity and its standard or accepted value, usually called the experimental discrepancy. Empirical law. A law strictly based on experimental data, describing the relations in that data. A law generally describes a very specific and limited phenomenon, and does not have the broader scope of a theory. Electricity. This word names a branch or subdivision of physics, just as other subdivisions are named ‘mechanics’, ‘thermodynamics’, ‘optics’, etc. Misuse alert: Sometimes the word electricity is colloquially misused as if it named a physical quantity, such as "The capacitor stores electricity," or "Electricity in a resistor produces heat."
Such usage should be avoided! In all such cases there's available a more specific or precise word, such as "The capacitor stores electrical energy," "The resistor is heated by the electric current," and "The utility company charges me for the electric energy I use." (I am not being charged based on the power, so these companies shouldn't call themselves Power companies. Some already have changed their names to something like "... Energy".) Energy. Energy is a property associated with a material body. Energy is not a material substance. When bodies interact, the energy of one may increase at the expense of the other, and this is sometimes called a transfer of energy. This does not mean that we could intercept this energy in transit and bottle some of it. After the transfer one of the bodies may have higher energy than before, and we speak of it as having "stored energy". But that doesn't mean that the energy is "contained in it" in the same sense as water in a bucket. Misuse example: "The earth's auroras (the northern and southern lights) illustrate how energy from the sun travels to our planet." Science News, 149, June 1, 1996. This sentence blurs understanding of the process by which energetic charged particles from the sun interact with the earth's magnetic field and our atmosphere, causing the light seen in auroras. The statement "Energy is a property of a body" needs clarification. As with many things in physics, the size of the energy depends on the coordinate system. A body moving with speed V in one coordinate system has kinetic energy ½mV². The same body has zero kinetic energy in a coordinate system moving along with it at speed V. Since no inertial coordinate system can be considered "special" or "absolute", we shouldn't say "The kinetic energy of the body is ..." but should say "The kinetic energy of the body moving in this reference frame is ..." Energy (take two). Elementary textbooks often say "there are many forms of energy, kinetic, potential, thermal, nuclear, etc. They can be converted from one form to another." Let's try to put more structure to this. There are really only two functional categories of energy. The energy associated with particles or systems can be said to be either kinetic energy or potential energy. Systems may exchange energy in two ways, through work or heat. Work and heat are never in a body or system; they measure the energy transferred during interactions between systems. Work always requires motion of a system or parts of it, moving the system's center of mass. Heating does not require macroscopic motion of either system. It involves exchanges of energy between systems on the microscopic level, and does not move the center of mass of either system. Equal. [Not all "equals" are equal.] The word equal and the symbol "=" have many different uses. The dictionary warns that equal things are "alike or in agreement in a specified sense with respect to specified properties." So we must be careful about the specified sense and specified properties. The meaning of the mathematical symbol, "=" depends upon what stands on either side of it. When it stands between vectors it symbolizes that the vectors are equal in both size and direction. In algebra the equal sign stands between two algebraic expressions and indicates that two expressions are related by a reflexive, symmetric and transitive relation. The mathematical expressions on either side of the "=" sign are mathematically identical and interchangeable in equations.
When the equal sign stands between two mathematical expressions with physical meaning, it means something quite different than when standing between two numbers. In physics we may correctly write 12 inches = 1 foot, but to write 12 = 1 is simply wrong. In the first case, the equation tells us about physically equivalent measurements. It has physical meaning, and the units are an indispensable part of the quantity. When we write a = dv/dt, we are defining the acceleration in terms of the time rate of change of velocity. One does not verify a definition by experiment. Experiment can, however, show that in certain cases (such as a freely falling body) the acceleration of the body is constant. The three-lined equal sign, º, is often used to mean "defined equal to". When we write F = ma, we are expressing a relation between measurable quantities, one which holds under specified conditions, qualifications and limitations. There's more to it than the equation. One must, for example, specify that all measurements are made in an inertial frame, for if they aren't, this relation isn't correct as it stands, and must be modified. Many physical laws, including this one, also include definitions. This equation may be considered a definition of force, if m and a are previously defined. But if F was previously defined, this may be taken as a definition of mass. But the fact that this relation can be experimentally tested, and possibly be shown to be false (under certain conditions) demonstrates that it is more than a mere definition. Additional discussion of these points may be found in Arnold Arons' book A Guide to Introductory Physics Teaching, section 3.23, listed in the references at the end of this document. Usage note: When reading equations aloud we often say, "F equals m a". This, of course, says that the two things are mathematically equal in equations, and that one may replace the other. It is not saying that F is physically the same thing as ma. Perhaps equations were not meant to be read aloud, for the spoken word does not have the subtleties of meaning necessary for the task. At least we should realize that spoken equations are at best a shorthand approximation to the meaning; a verbal description of the symbols. If we were to try to speak the physical meaning, it would be something like: "Newton's law tells us that the net vector force acting on a body of mass m is mathematically equal to the product of its mass and its vector acceleration." In a textbook, words like that would appear in the text near the equation, at least on the first appearance of the equation.Error. In colloquial usage, "a mistake". In technical usage error is a synonym for the experimental uncertainty in a measurement or result. See: uncertainty. Error analysis. The mathematical analysis (calculations) done to show quantitatively how uncertainties in data produce uncertainty in calculated results, and to find the sizes of the uncertainty in the results. [In mathematics the word analysis is synonymous with calculus, or "a method for mathematical calculation." Calculus courses used to be named Analysis.] Extensive property. A measurable property of a thermodynamic system is extensive if, when two identical systems are combined into one, the value of that property of the combined system is double its original value in each system. Examples: mass, volume, number of moles. See: intensive variable and specific. Experimental error. The uncertainty in the value of a quantity. 
This may be found from (1) statistical analysis of the scatter of data, or (2) mathematical analysis showing how data uncertainties affect the uncertainty of calculated results. Misuse alert: In elementary lab manuals one often sees: experimental error = |your value - book value| / book value. This should be called the experimental discrepancy. See: discrepancy. Factor. One of several things multiplied together. Misuse alert: Be careful that the reader does not confuse this with the colloquial usage: "One factor in the success of this experiment was…" Fictitious force. See: inertial frames. Focal point. The focal point of a lens is defined by considering a narrow beam of light incident upon the lens, parallel to the optic (symmetry) axis of the lens and centered on that axis. The focal point is that point to which the rays converge or from which they diverge after passing through the lens. The convergent case defines a converging (positive) lens. The second case defines a diverging (negative) lens. It’s easy to tell which kind of lens you have, for converging lenses are thicker at their center than at the edges, and diverging lenses are thinner at the center than at the edges. FPS. The system of units based on the fundamental units of the ‘English system’: foot, pound and second. Function. A relation between the elements of one set, X (the domain), and the elements of another set, Y (the range), such that for each element in the domain X there's only one corresponding element in the range Y. When a function is written in the form of an equation relating values of variables, y = y(x), y must be single-valued, that is, each value of x corresponds to only one value of y. While y = x² is a function, x = y^(1/2) is not. Both equations express relations, however. Experimental science deals with mathematical relations between measurements. Physical laws express these relations. Physical theories often include entities that are defined to be functions of other quantities. Scientists often use the word function colloquially in the sense of "depends on" as in "Pressure is a function of volume and temperature", when they really mean just "Pressure depends on volume and temperature." Heat. Heat, like work, is a measure of the amount of energy transferred from one body to another because of the temperature difference between those bodies. Heat is not energy possessed by a body. We should not speak of the "heat in a body." The energy a body possesses due to its temperature is a different thing, called internal thermal energy. The misuse of this word probably dates back to the 18th century when it was still thought that bodies undergoing thermal processes exchanged a substance, called caloric or phlogiston, a substance later called heat. We now know that heat is not a substance. Reference: Zemansky, Mark W. "The Use and Misuse of the Word 'Heat' in Physics Teaching," The Physics Teacher, 8, 6 (Sept 1970) p. 295-300. See: work. Heisenberg's Uncertainty Principle. Pairs of measurable quantities whose product has dimensions of energy×time are called conjugate quantities in quantum mechanics, and have a special relation to each other, expressed in Heisenberg's uncertainty principle. It says that the product of the uncertainties of the two quantities is no smaller than h/2π. So if you improve the measurement precision of one quantity the precision of the other gets worse. Misuse alert: Folks who don't pay attention to details of science are heard to say "Heisenberg showed that you can't be certain about anything."
We also hear some folk justifying belief in esp or psychic phenomena by appeal to the Heisenberg principle. This is wrong on several counts. (1) The precision of any measurement is never perfectly certain, and we knew that before Heisenberg. (2) The Heisenberg uncertainty principle tells us we can measure anything with arbitrarily small precision, but in the process some other measurement gets worse. (3) The uncertainties involved here affect only microscopic (atomic and molecular level phenomena) and have no applicability to the macroscopic phenomena of everyday life.Hypothesis. An untested statement about nature; a scientific conjecture, or educated guess. Elementary textbooks often declare that a hypothesis is made prior to doing the experiments designed to test it. However, we must recognize that experiments sometimes reveal unexpected and puzzling things, motivating one to then explore various hypotheses that might serve to explain the experiments. Further testing of the hypotheses under other conditions is then in order, as always. Compare: law and theory. Ideal-lens equation. 1/p + 1/q = 1/f, where p is the distance from object to lens, q is the distance from lens to image, and f is the focal length of the lens. This equation has important limitations, being only valid for thin lenses, and for paraxial rays. Thin lenses have thickness small compared to p, q, and f. Paraxial rays are those which make angles small enough with the optic axis that the approximation (angle in radian measure) = sin(angle) may be used. See: optical sign conventions, and image. Independent variable. See variable. Inertia A descriptive term for that property of a body that resists change in its motion. Two kinds of changes of motion are recognized: changes in translational motion, and changes in rotational motion. In modern usage, the measure of translational inertia is mass. Newton's first law of motion is sometimes called the "Law of Inertia", a label which adds nothing to the meaning of the first law. Newton's first and second laws together are required for a full description of the consequences of a body's inertia. The measure of a body's resistance to rotation is its Moment of Inertia. See: moment of inertia, Misuse alert: One sometimes sees "A force arises because of inertia." This misleads one into supposing that the inertia is a cause of the force. It is not hard to discuss all of the physics of force, mass and acceleration without ever using the word "inertia". Unfortunately we are stuck with it in the widely used name "moment of inertia".Inertial frame. A non-accelerating coordinate system. One in which F = ma holds, where F is the sum of all real forces acting on a body of mass m whose acceleration is a. In classical mechanics, the real forces on a body are those which are due to the influence of another body. [Or, forces on a part of a body due to other parts of that body.] Contact forces, gravitational, electric, and magnetic forces are real. Fictitious forces are those which arise solely from formulating a problem in a non-inertial system, in which ma = F + (fictitious force terms) Intensive variable. A measurable property of a thermodynamic system is intensive if when two identical systems are combined into one, the variable of the combined system is the same as the original value in each system. Examples: temperature, pressure. See: extensive variable, and specific. 
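The ideal-lens equation entry above lends itself to a quick numerical check. Below is a minimal Python sketch, assuming the freshman sign convention described later under optical sign conventions; the function name image_distance and the sample numbers are mine, chosen only for illustration.

    def image_distance(p, f):
        # Solve 1/p + 1/q = 1/f for q (thin lens, paraxial rays).
        # p: object-to-lens distance, f: focal length, in the same units.
        if p == f:
            raise ValueError("object at the focal point: image at infinity")
        return 1.0 / (1.0 / f - 1.0 / p)

    # Object 30 cm from a converging lens of focal length 10 cm:
    q = image_distance(30.0, 10.0)
    print(q)          # 15.0 cm beyond the lens
    print(-q / 30.0)  # linear magnification -0.5: image inverted, half size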
Image: A point mapping of luminous points of an object located in one region of space to points in another region of space, formed by refraction or reflection of light in a manner which causes light from each point of the object to converge to or diverge from a point somewhere else (on the image). The images that are useful generally have the character that adjacent points of the object map to adjacent points of the image without discontinuity, and the image is a recognizable (though perhaps distorted) mapping of the object. This qualification allows for anamorphic images, that are stretched or compressed in one direction, as well as the sort of distorted (but recognizable) images you see in a fun-house mirror. See: real image and virtual image. Lens. A transparent object with two refracting surfaces. Usually the surfaces are flat or spherical. Sometimes, to improve image quality, lenses are deliberately made with surfaces that depart slightly from spherical (aspheric lenses). Kinetic energy. The energy a body has by virtue of its motion. The kinetic energy is the work done by an external force to bring the body from rest to a particular state of motion. See: work. Common misconception: Many students think that kinetic energy is defined by ½mv². It is not. That happens to be approximately the kinetic energy of objects moving slowly, at small fractions of the speed of light. If the body is moving at relativistic speeds, its kinetic energy is (γ − 1)mc², which can be expressed as ½mv² plus an infinite series of progressively smaller terms. Here γ² = 1/(1 − (v/c)²), where c is the speed of light in a vacuum. Macro-. A prefix meaning ‘large’. See: micro- Macroscopic. A physical entity or process of large scale, the scale of ordinary human experience. Specifically, any phenomena in which the individual molecules and atoms are neither measured, nor explicitly considered in the description of the phenomena. See: microscopic. Magnification. Two kinds of magnification are useful to describe optical systems and they must not be confused, since they aren't synonymous. Any optical system that produces a real image from a real object is described by its linear magnification. Any system that one looks through to view a virtual image is described by its angular magnification. These have different definitions, and are based on fundamentally different concepts. Linear Magnification is the ratio of the size of the image to the size of the object. Angular Magnification is the ratio of the angular size of the object as seen through the instrument to the angular size of the object as seen with the 'naked eye' under the best viewing conditions. The 'naked eye' view is without use of the optical instrument, but under optimal viewing conditions. Certain 'gotchas' lurk here. What are 'optimal' conditions? Usually this means the conditions in which the object's details can be seen most clearly. For a small object held in the hand, this would be when the object is brought as close as possible and still seen clearly, that is, to the near point of the eye, about 25 cm for normal eyesight. For a distant mountain, one can't bring it close, so when determining the magnification of a telescope, we assume the object is very distant, essentially at infinity. And what is the 'optimal' position of the image? For the simple magnifier, in which the magnification depends strongly on the image position, the image is best seen at the near point of the eye, 25 cm.
For the telescope, the image size doesn't change much as you fiddle with the focus, so you likely will put the image at infinite distance for relaxed viewing. The microscope is an intermediate case. Always striving for greater resolution, the user may pull the image close, to the near point, even though that doesn't increase its size very much. But usually, users will place the image farther away, at the distance of a meter or two, or even at infinity. But, because the object is very near the focal point, the magnification is only weakly dependent on image position. Some texts express angular magnification as the ratio of the angles, some express it as the ratio of the tangents of the angles. If all of the angles are small, there's negligible difference between these two definitions. However, if you examine the derivation of the formula these books give for the magnification of a telescope, f_o/f_e, you realize that they must have been using the tangents. The tangent form of the definition is the traditionally correct one, the one used in science and industry, for nearly all optical instruments that are designed to produce images which preserve the linear geometry of the object. Micro-. A prefix meaning ‘small’, as in ‘microscope’, ‘micrometer’, ‘micrograph’. Also, a metric prefix meaning 10⁻⁶. See: macro- Microscopic. A physical entity or process of small scale, too small to directly experience with our senses. Specifically, any phenomena on the molecular and atomic scale, or smaller. See: macroscopic. MKSA. The system of physical units based on the fundamental metric units: meter, kilogram, second and ampere. Modern physics. The transition from classical physics to modern physics was gradual, over about 30 years. Classical physics is still a part of physics, and the demarcation between classical and modern physics relates to the size and character of the systems studied. Classical physics applies to bodies of sizes larger than atoms and molecules, moving at speeds much slower than the speed of light. Quantum mechanics applies at size scales of atoms or smaller. Relativity is necessary at speeds near the speed of light. See: classical physics. Mole. The term mole is short for the name gram-molar-weight; it is not a shortened form of the word molecule. (However, the word molecule does also derive from the word molar.) See: Avogadro’s constant. Misuse alert: Many books emphasize that the mole is "just a number," a measure of the number of particles in a collection. They say that one can have a mole of any kind of particles, baseballs, atoms, stars, grains of sand, etc. It doesn't have to be molecules. This is misleading. Molecular mass. The molecular mass of something is the mass of one mole of it (in cgs units), or one kilomole of it (in MKS units). The units of molecular mass are gram and kilogram, respectively. The cgs and MKS values of molecular mass are numerically equal. The molecular mass is not the mass of one molecule. Some books still call this the molecular weight. One dictionary definition of molar is "Pertaining to a body of matter as a whole: contrasted with molecular and atomic." The mole is a measure appropriate for a macroscopic amount of material, as contrasted with a microscopic amount (a few atoms or molecules). See: mole, Avogadro's constant, microscopic, macroscopic. Moment of Inertia. A property of a body that relates its angular acceleration about a particular axis to the net torque on the body about that axis. τ = Iα.
The moment of inertia is very much dependent on the chosen axis, for it may have a different value for different axes. In fact, the moment of inertia is best expressed as a three dimensional array (matrix) of values measured with respect to a three dimensional coordinate system. There is always one particular coordinate system in which this matrix is diagonal, having only three distinct values along its diagonal, and zeros elsewhere. These axes are called the principal axes of the body, and the three values are the principal moments of inertia of the body. This may be thought of as analogous to Newton's second law F = ma, where m (mass) is a measure of translational inertia, and I (moment of inertia) is rotational inertia. But always be suspicious of analogies, except as memory clues. The moments of inertia of an extended body can be calculated directly by volume integrals taken over the volume of the body. The formula is I = ∫r²dm, where r is the perpendicular distance from the mass element dm to the chosen axis. Newton's first and second laws of motion. F = d(mv)/dt. F is the net (total) force acting on the body of mass m. The individual forces acting on m must be summed vectorially. In the special case where the mass is constant, this becomes F = ma. Newton's third law of motion. When body A exerts a force on body B, then B exerts an equal and opposite force on A. The two forces related by this law act on different bodies. The forces in Newton's third need not be net forces, but because forces sum vectorially, Newton's third is also true for net forces on a body. Ohm's law. V = IR, where V is the potential across a circuit element, I is the current through it, and R is its resistance. This is not a generally applicable definition of resistance. It is only applicable to ohmic resistors, those whose resistance R is constant over the range of interest and V obeys a strictly linear relation to I. Materials are said to be ohmic when V depends linearly on I. Metals are ohmic so long as one holds their temperature constant. But changing the temperature of a metal changes R slightly. (More than slightly if it melts!) When the current changes rapidly, as when turning on a lamp, or when using AC sources, non-linear and non-ohmic behavior can be observed. For non-ohmic resistors, R is current-dependent and the definition R = dV/dI is far more useful. This is sometimes called the dynamic resistance. Solid state devices such as thermistors are non-ohmic and non-linear. A thermistor's resistance decreases as it warms up, so its dynamic resistance is negative. Tunnel diodes and some electrochemical processes have a complicated I-V curve with a negative resistance region of operation. The dependence of resistance on current is partly due to the change in the device's temperature with increasing current, but other subtle processes also contribute to change in resistance in solid state devices. Operational definition. A definition that describes an experimental procedure by which a numeric value of the quantity may be determined. See dimensions. Example: Length is operationally defined by specifying a procedure for subdividing a standard of length into smaller units to make a measuring stick, then laying that stick on the object to be measured, etc. Very few quantities in physics need to be operationally defined. They are the fundamental quantities, which include length, mass and time. Other quantities are defined from these through mathematical relations.
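As a check on the integral I = ∫r²dm quoted above, here is a minimal Python sketch that evaluates it numerically for a thin uniform rod and recovers the familiar closed-form results; the function name and the discretization are my own illustrative choices, not part of the glossary.

    def rod_moment_of_inertia(mass, length, axis_position, n=100000):
        # Numerically sum r**2 * dm over a thin uniform rod.
        # axis_position: distance of the perpendicular axis from one end of the rod.
        dm = mass / n
        dx = length / n
        total = 0.0
        for i in range(n):
            x = (i + 0.5) * dx        # midpoint of mass element i
            r = x - axis_position     # perpendicular distance to the axis
            total += r * r * dm
        return total

    M, L = 2.0, 3.0
    print(rod_moment_of_inertia(M, L, 0.0))     # about M*L**2/3  (axis at one end)
    print(rod_moment_of_inertia(M, L, L / 2))   # about M*L**2/12 (axis through the center)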
Optical sign conventions. In introductory (freshman) courses in physics a sign convention is used for objects and images in which the lens equation must be written 1/p + 1/q = 1/f. Often the rules for this sign convention are presented in a convoluted manner. A simple and easy to remember rule is this: p is the object-to-lens distance. q is the lens-to-image distance. The coordinate axis along the optic axis is in the direction of passage of light through the lens, this defining the positive direction. Example: If the axis and the light direction is left-to-right (as is usually done) and the object is to the left of the lens, the object-to-lens distance is positive. If the object is to the right of the lens (virtual object), the object-to-lens distance is negative. It works the same for images. For refractive surfaces, define the surface radius to be the directed distance from a surface to its center of curvature. Thus a surface convex to the incident light is positive, one concave to the incident light is negative. The surface equation is then n/s + n'/s' = (n'-n)/R, where s and s' are the object and image distances, and n and n' the refractive index of the incident and emergent media, respectively. For mirrors, the equation is usually written 1/s + 1/s' = 2/R = 1/f. A diverging mirror is convex to the incoming light, with negative f. From this fact we conclude that R is also negative. This form of the equation is consistent with that of the lens equation, and the interpretation of sign of focal length is the same also. But violence is done to the definition of R we used above, for refraction. One can say that the mirror folds the length axis at the mirror, so that emergent rays to a real image at the left represent a positive value of s'. We are forced also to declare that the mirror also flips the sign of the surface radius. For reflective surfaces, the radius of curvature is defined to be the directed distance from a surface to its center of curvature, measured with respect to the axis used for the emergent light. With this qualification the convention for the signs of s' and R is the same for mirrors as for refractive surfaces. In advanced optics courses, a cartesian sign convention is used in which all things to the left of the lens are negative, all those to the right are positive. When this is used, the lens equation must be written 1/p + 1/f = 1/q. (The sign of the 1/p term is opposite that in the other sign convention). This is a particularly meaningful version, for 1/p is the measure of vergence (convergence or divergence) of the rays as they enter the lens, 1/f is the amount the lens changes the vergence, and 1/q is the vergence of the emergent rays. Particle. This word, lifted from colloquial usage, means different things in science, depending on the context. To the Greek philosophers it meant a "little piece" of matter, and Democritus taught that these pieces, that he called "atoms", had different geometric shapes that governed how they could combine and link together. This idea was speculative, and not supported by any specific experiments or evidence. The "atomic theory" didn't arise until the 19th century, motivated primarily by the emerging science of chemistry, though at first some scientists rejected the reality of atoms, considering atoms to be no more than a "useful fiction" since they weren't directly observable. In the early 20th century the Bohr theory gave a detailed picture of atoms as something like "miniature solar systems" of electrons orbiting an incredibly small and dense nucleus.
This "classical" picture proved to be misleadingly simplistic, though it is still the "picture" in most people's minds when they think of atoms. Since then experimentalists have identified a whole "zoo" of particles that arise in nuclear reactions. But are these "little pieces" of matter as the Greeks thought? Or are they a convenient fiction to describe what we measure with increasingly sophisticated "particle detectors"? Perhaps what we are measuring is nothing more than "events" resulting from complex interactions of wave functions. Did you really expect a definite and final answer to this here? Pascal's Principle of Hydrostatics. Pascal actually has three separate principles of hydrostatics. When a textbook refers to Pascal's Principle it should specify which is meant. Pascal 1: The pressure at any point in a liquid exerts force equally in all directions. This shorthand slogan means that an infinitesimal surface area placed at that point will experience the same force due to pressure no matter what its orientation. Pascal 2: When pressure is changed (increased or decreased) at any point in a homogeneous, incompressible fluid, all other points experience the same change of pressure. Except for minor edits and insertion of the words 'homogeneous' and 'incompressible', this is the statement of the principle given in John A. Eldridge's textbook College Physics (McGraw-Hill, 1937). Yet over half of the textbooks I've checked, including recent ones, omit the important word 'changed'. Some textbooks add the qualification 'enclosed fluid'. This gives the false impression that the fluid must be in a closed container, which isn't a necessary condition of Pascal's principle at all. Some of these textbooks do indicate that Pascal's principle applies only to changes in pressure, but do so in the surrounding text, not in the bold, highlighted, and boxed statement of the principle. Students, of course, read the emphasized statement of the principle and not the surrounding text. Few books give any examples of the principle applied to anything other than enclosed liquids. The usual example is the hydraulic press. Too few show that Pascal's principle is derivable in one step from Bernoulli's equation. Therefore students have the false impression that these are independent laws. Pascal 3. The hydraulic lever. The hydraulic jack is a problem in fluid equilibrium, just as a pulley system is a problem in mechanical equilibrium (no accelerations involved). It's the static situation in which a small force on a small piston balances a large force on a large piston. No change of pressure need be involved here. A constant force on one piston slowly lifts a different piston with a constant force on it. At all times during this process the fluid is in near-equilibrium. This "principle" is no more than an application of the definition of pressure as F/A, the quotient of net force to the area over which the force acts. However, it also uses the principle that pressure in a fluid is uniform throughout the fluid at all points of the same height. This hydraulic jack lifting process is done at constant speed. If the two pistons are at different levels, as they usually are in real jacks used for lifting, there's a pressure difference between the two pistons due to the height difference, ρgh, where ρ is the density of the liquid. In textbook examples this is generally considered small enough to neglect and may not even be mentioned.
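A minimal Python sketch of the hydraulic lever (Pascal 3) described above, including the ρgh term that textbook examples usually neglect; the piston sizes, fluid density, and function name are my own illustrative assumptions, not taken from the glossary.

    def balancing_force(small_force, small_area, large_area,
                        height_drop=0.0, density=1000.0, g=9.81):
        # Static equilibrium: pressure under the small piston is F/A;
        # if the large piston sits lower by height_drop (meters), add rho*g*h.
        pressure = small_force / small_area + density * g * height_drop
        return pressure * large_area

    # 100 N on a 5 cm^2 piston balancing a 500 cm^2 piston at the same height:
    print(balancing_force(100.0, 5e-4, 5e-2))              # 10000 N
    # Same jack with the large piston 0.2 m lower, oil of density 900 kg/m^3:
    print(balancing_force(100.0, 5e-4, 5e-2, 0.2, 900.0))  # about 10088 N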
Pascal's own discussion of the principle is not concisely stated and can be misleading if hastily read. See his On the Equilibrium of Liquids, 1663. He introduces the principle with the example of a piston as part of an enclosed vessel and considers what happens if a force is applied to that piston. He concludes that each portion of the vessel is pressed in proportion to its area. He does mention parenthetically that he is "excluding the weight of the water..., for I am speaking only of the piston's effect." Percentage. Older dictionaries suggested that percentage be used when a non-quantitative statement is being made: "The percentage growth of the economy was encouraging." But use percent when specifying a numerical value: "The gross national product increased by 2 percent last year." One other use of "percentage" is proper, however. When comparing a percent measure which changes, it's common to express that change in "percentage points." For example, if the unemployment rate is 5% one month, and 6% the next, we say "Unemployment increased by one percentage point". The relative change in unemployment was, however, an increase of 20 percent. The average person hearing such figures seldom stops to think what the words mean, and many people think that "percent" and "percentage point" are synonyms. They are not. This is one more reason to avoid using the word "percentage" when expressing percent measures. The term "percentage point" is almost never used in the sciences. (Unless you consider economics a science.) Students in the sciences, unaware of this distinction, will say "The experimental percentage uncertainty in our result was 9%." Perhaps they are trying to "sound profound". In view of the above discussion, this isn't what the student meant. The student should have simply said: "The experimental uncertainty in our result was 9%." Related note: Students have the strange idea that results are better when expressed as percents. Some experimental uncertainties must not be expressed as percents. Examples: (1) temperature in Celsius or Fahrenheit measure, (2) index of refraction, (3) dielectric constants. These measurables have arbitrarily chosen ‘fixed points’. Consider a 1 degree uncertainty in a temperature of 99 degrees C. Is the uncertainty 1%? Consider the same error in a measurement of 5 degrees. Is the uncertainty now 20%? Consider how much smaller the percent would be if the temperature were expressed in kelvins. This shows that percent uncertainty of Celsius and Fahrenheit temperature measurements is meaningless. However, the absolute (Kelvin) temperature scale has a physically meaningful fixed point (absolute zero), rather than an arbitrarily chosen one, and in some situations a percent uncertainty of an absolute temperature is meaningful. Per unit. In my opinion this expression is a barbarism best avoided. When a student is told that electric field is force per unit charge and in the MKS system one unit of charge is a coulomb (a huge amount) must we obtain that much charge to measure the field? Certainly not. In fact, one must take the limit of F/q as q goes to zero. Simply say: "Force divided by charge" or "F over q" or even "force per charge". Unfortunately there is no graceful way to say these things, other than simply writing the equation. Per is one of those frustrating words in English. The American Heritage Dictionary definition is: "To, for, or by each; for every." Example: "40 cents per gallon."
We must put the blame for per unit squarely on the scientists and engineers. Precise. Sharply or clearly defined. Having small experimental uncertainty. A precise measurement may still be inaccurate, if there were an unrecognized determinate error in the measurement (for example, a miscalibrated instrument). Compare: accurate. Proof. A term from logic and mathematics describing an argument from premise to conclusion using strictly logical principles. In mathematics, theorems or propositions are established by logical arguments from a set of axioms, the process of establishing a theorem being called a proof. The colloquial meaning of ‘proof’ causes many problems in physics discussions and is best avoided. Since mathematics is such an important part of physics, the mathematician’s meaning of proof should be the only one we use. Also, we often ask students in upper level courses to do proofs of certain theorems of mathematical physics, and we are not asking for experimental demonstration! So, in a laboratory report, we should not say "We proved Newton's law." Rather say, "Today we demonstrated (or verified) the validity of Newton's law in the particular case of…" Science doesn't prove, but it can disprove. See: Why? Radioactive material. A material whose nuclei spontaneously give off nuclear radiation. Naturally radioactive materials (found in the earth's crust) give off alpha, beta, or gamma particles. Alpha particles are Helium nuclei, beta particles are electrons, and gamma particles are high energy photons. Radioactive. A word distinguishing radioactive materials from those which aren't. Usage: "U-235 is radioactive; He-4 is not." Note: Radioactive is least misleading when used as an adjective, not as a noun. It is sometimes used in the noun form as an shortened stand-in for radioactive material, as in the example above.Radioactivity. The process of emitting particles from the nucleus. Usage: "Certain materials found in nature demonstrate radioactivity." Misuse alert: Radioactivity is a process, not a thing, and not a substance. It is just as incorrect to say "U-235 emits radioactivity" as it is to say "current flows." A malfunctioning nuclear reactor does not release radioactivity, though it may release radioactive materials into the surrounding environment. A patient being treated by radiation therapy does not absorb radioactivity, but does absorb some of the radiation (alpha, beta, gamma) given off by the radioactive materials being used.Rate. A quantity of one thing compared to a quantity of another. [Dictionary definition] In physics the comparison is generally made by taking a quotient. Thus speed is defined to be the dx/dt, the ‘time rate of change of position’. Common misuse: We often hear non-scientists say such things as "The car was going at a high rate of speed." This is redundant at best, since it merely means "The car was moving at high speed." It is the sort of mistake made by people who don't think while they talk.Ratio. The quotient of two similar quantities. In physics, the two quantities must have the same units to be ‘similar’. Therefore we may properly speak of the ratio of two lengths. But to say "the ratio of charge to mass of the electron" is improper. The latter is properly called "the specific charge of the electron." See: specific. Reaction. Reaction forces are those equal and opposite forces of Newton's Third Law. Though they are sometimes called an action and reaction pair, one never sees a single force referred to as an action force. See: Newton’s Third Law. 
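To make the percentage-point distinction discussed a few entries above concrete, here is a trivial Python sketch using the glossary's own 5% and 6% unemployment example.

    old_rate, new_rate = 5.0, 6.0   # unemployment, in percent

    points = new_rate - old_rate                        # change in percentage points
    relative = (new_rate - old_rate) / old_rate * 100   # relative change, in percent

    print(points)    # 1.0  -> unemployment rose by one percentage point
    print(relative)  # 20.0 -> a 20 percent relative increase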
Real force. See: inertial frame. Real image. The point(s) to which light rays converge as they emerge from a lens or mirror. See: virtual image. Real object. The point(s) from which light rays diverge as they enter a lens or mirror. See: virtual object. Reality. We say that science studies the "real" world of perception and measurement. If we can apprehend something with our senses, or measure it, we treat it as "real". We have learned not to completely trust our unaided senses, for we know that we can be fooled by illusions, so we rely more on specially designed measuring instruments. Yet much of the language of science has entities that are not directly observable by our senses, such as "energy", and "momentum". These are, however, directly related to observables and defined through exact equations. Philosophers may argue whether the "real" world exists, but so long as our sense impressions and measurements of this real world are shared by independent observers and are precisely repeatable, we can do physics without philosophical concerns. Relation. A rule of correspondence between the set of values of one quantity to the values of another quantity, often (but not always) expressible as an equation. See function. Relative. Colloquially "compared to". In the theory of relativity observations of moving observers are quantitatively compared. These observers obtain different values when measuring the same quantities, and these quantities are said to be relative. The theory, however, shows us how the differing measured values are precisely related to the relative velocity of the two observers. Some quantities are found to be the same for all observers, and are called invariant. One postulate of relativity theory is that the speed of light is an invariant quantity. When the theory is expressed in four dimensional form, with the appropriate choice of quantities, new invariant quantities emerge: the world-displacement (x, y, z, ict), the energy-momentum four-vector, and the electric and magnetic potentials may be combined into an invariant four-vector. Thus relativity theory might properly be called invariance theory. Misuse alert: One hears some folks with superficial minds say "Einstein showed that everything is relative." In fact, special relativity shows that only certain measurable things are relative, but in a precisely and mathematically specific way, and other things are not relative, for all observers agree on them. Relative uncertainty. The uncertainty in a quantity compared to the quantity itself, expressed as a ratio of the absolute uncertainty to the size of the quantity. It may also be expressed as a percent uncertainty. The relative uncertainty is dimensionless and unitless. See: absolute uncertainty. Rigid body. Classical mechanics textbooks have a chapter on the mechanics of perfectly rigid bodies, but may fail to define what they are. If one thinks about it, one must conclude that there's no such thing as a perfectly rigid body. All bodies are compressible because of the inherent atomic structure of materials. Even if you look at purely classical phenomena, such as the collision of two billiard balls, the observed physics couldn't happen if the bodies were perfectly rigid. (The forces at impact would have to be infinite.) The "rigid body" assumption is a mathematical convenience that is useful and gives correct results for many important phenomena, much as it is sometimes useful to analyze systems by assuming that friction is negligible. Scale-limited.
A measuring instrument is said to be scale-limited if the experimental uncertainty in that instrument is smaller than the smallest division readable on its scale. Therefore the experimental uncertainty is taken to be half the smallest readable increment on the scale. Specific. In physics and chemistry the word specific in the name of a quantity usually means divided by an extensive measure that is, divided by a quantity representing an amount of material. Specific volume means volume divided by mass, which is the reciprocal of the density. Specific heat capacity is the heat capacity divided by the mass. See: extensive, and capacity. Tele-. A prefix meaning at a distance, as in telescope, telemetry, television. Term. One of several quantities which are added together. Confusion can arise with another use of the word, as when one is asked to “Express the result in terms of mass and time.” This means that the result is “dependent on mass and time,” obviously it doesn’t mean that mass and time are to be added as terms. Truth. This is a word best avoided entirely in physics except when placed in quotes, or with careful qualification. Its colloquial use has so many shades of meaning from ‘it seems to be correct’ to the absolute truths claimed by religion, that it’s use causes nothing but misunderstanding. Someone once said "Science seeks proximate (approximate) truths." Others speak of provisional or tentative truths. Certainly science claims no final or absolute truths. Theoretical. Describing an idea which is part of a theory, or a consequence derived from theory. Misuse alert: Do not call an authoritative or ‘book’ value of a physical quantity a theoretical value, as in: "We compared our experimentally determined value of index of refraction with the theoretical value and found they differed by 0.07." The value obtained from index of refraction tables comes not from theory, but from experiment, and therefore should not be called theoretical. The word theoretically suffers the same abuse. Only when a numeric value is a prediction from theory, can one properly refer to it as a "theoretical value".Theory. A well-tested mathematical model of some part of science. In physics a theory usually takes the form of an equation or a group of equations, along with explanatory rules for their application. Theories are said to be successful if (1) they synthesize and unify a significant range of phenomena; (2) they have predictive power, either predicting new phenomena, or suggesting a direction for further research and testing. Compare: hypothesis, and law. Uncertainty. Synonym: error. A measure of the inherent variability of repeated measurements of a quantity. A prediction of the probable variability of a result, based on the inherent uncertainties in the data, found from a mathematical calculation of how the data uncertainties would, in combination, lead to uncertainty in the result. This calculation or process by which one predicts the size of the uncertainty in results from the uncertainties in data and procedure is called error analysis. See: absolute uncertainty and relative uncertainty. Uncertainties are always present; the experimenter’s job is to keep them as small as required for a useful result. We recognize two kinds of uncertainties: indeterminate and determinate. Indeterminate uncertainties are those whose size and sign are unknown, and are sometimes (misleadingly) called random. 
Determinate uncertainties are those of definite sign, often referring to uncertainties due to instrument miscalibration, bias in reading scales, or some unknown influence on the measurement. "Uncertainty" and "error" have colloqual meanings as well. Examples: "I have some uncertainty how to proceed." "The answer isn't reasonable; I must have made an error (mistake or blunder)." Units. Labels which distinguish one type of measurable quantity from other types. Length, mass and time are distinctly different physical quantities, and therefore have different unit names, meters, kilograms and seconds. We use several systems of units, including the metric (SI) units, the English (or U.S. customary units) , and a number of others of mainly historical interest. Note: Some dimensionless quantities are assigned unit names, some are not. Specific gravity has no unit name, but density does. Angles are dimensionless, but have unit names: degree, radian, grad. Some quantities which are physically different, and have different unit names, may have the same dimensions, for example, torque and work. Compare: dimensions. Much confusion exists about the meanings of dependent and independent variables. In one sense this distinction hinges on how you write the relation between variables. (1) If you write a function or relation in the form y = f(x), y is considered dependent on x and x is said to be the independent variable. (2) If one variable (say x) in a relation is experimentally set, fixed, or held to particular values while measuring corresponding values of y, we call x the independent variable. We could just as well (in some cases) set values of y and then determine corresponding values of x. In that case y would be the independent variable. (3) If the experimental uncertainties of one variable are smaller than the other, the one with the smallest uncertainty is often called the independent variable. (4) As a general rule independent variables are plotted on the horizontal axis of a graph, but this is not required if there's a good reason to do it otherwise. Some common statistical packages for computers can only deal with situations where one variable is assumed error-free, and all the experimental error is in the other one. They cavalierly refer to the error-free variable as the independent variable. But in real science, there's always some experimental error in all values, including those we "set" in advance to particular values. Virtual image. The point(s) from which light rays converge as they emerge from a lens or mirror. The rays do not actually pass through each image point. [One and only one ray, the one which passes through the center of the lens, does pass through the image point.] See: real image. Virtual object. The point(s) to which light rays converge as they enter a lens. The rays do not actually pass through each object point. [One and only one ray, the one which passes through the center of the lens, does pass through the object point.] See: real object. Weight. The size of the external force required to keep a body at rest in its frame of reference. Elementary textbooks almost universally define weight to be "the size of the gravitational force on a body." This would be fine if they would only consistently stick to that definition. But, no, they later speak of weightless astronauts, loss of weight of a body immersed in a liquid, etc. The student who is really thinking about this is confused. 
Some books then tie themselves in verbal knots trying to explain (and defend) why they use the word inconsistently. Our definition has the virtue of being consistent with all of these uses of the word. In the special case of a body supported near the earth's surface, where the acceleration due to gravity is g, the weight happens to have size mg. So this definition gives the same size for the weight as the more common definition. This definition is consistent with the statement: "The astronauts in the orbiting spacecraft were in a weightless condition." This is because they and their spacecraft have the same acceleration, and in their frame of reference (the spacecraft) no force is needed to keep them at the same position relative to their spacecraft. They and their spacecraft are both falling at the same rate. The gravitational force on the astronauts is still mg (though g is about 12% smaller at an altitude of 400 km than it is at the surface of the earth. It is not zero). This definition is consistent with statements about the "loss of weight" of a body immersed in a liquid (due to the buoyant force). The "weight" meant here is the external force (not counting the buoyant force) required to support the body in equilibrium in the liquid. Why? Students often ask questions with the word why in them. "Why is the sky blue?" "Why do objects fall to earth?" "Why are there no bodies with negative mass?" "Why is the universe lawful?" What sort of answers does one desire to such a question? What sort of answers can science give? If you want some mystical, ultimate or absolute answer, you won't get it from science. Philosophers of science point out that science doesn't answer why questions, it only answers how questions. Science doesn't explain; science describes. Science postulates models to describe how some part of nature behaves, then tests and refines that model till it works as well as we can measure (as evidenced by repeated, skeptical testing). Science doesn’t provide ultimate or absolute answers, but only proximate (good enough) answers. Science can't find absolute truth, but it can expose errors and identify things which aren't so, thereby narrowing the region in which truth may reside. In the process, science has produced more reliable knowledge than any other branch of human thought. Work. The amount of energy transferred to or from a body or system as a result of forces acting upon the body, causing displacement of the body or parts of it. More specifically the work done by a particular force is the product of the displacement of the body and the component of the force in the direction of the displacement. A force acting perpendicular to the body's displacement does no work on the body. A force acting upon a body which undergoes no displacement does no work on that body. Also, it follows that if there's no motion of a body or any part of the body, nothing did work on the body. See: kinetic energy. Zeroth law of thermodynamics. If body A is in thermal equilibrium with body B, and B is also in thermal equilibrium with C, then A is necessarily in thermal equilibrium with C. This is equivalent to saying that thermal equilibrium obeys a transitive mathematical relation. Since we define equality of temperature as the condition of thermal equilibrium, then this law is necessary for the complete definition of temperature. It ensures that if a thermometer (body B) indicates that body A and C give the same thermometer reading, then they bodies A and C are at the same temperature. 
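As a concrete illustration of the uncertainty, absolute uncertainty, relative uncertainty, and error analysis entries above, here is a minimal Python sketch that propagates data uncertainties into a calculated result (a density determination). The quadrature-sum rule for independent, indeterminate uncertainties and the sample numbers are my own illustrative assumptions, not something stated in the glossary.

    import math

    # Data quoted as (quantity) +/- (absolute uncertainty):
    mass, mass_unc = 25.4, 0.2      # grams
    volume, volume_unc = 9.6, 0.3   # cubic centimeters

    density = mass / volume

    # For a quotient, the relative uncertainties combine; here they are
    # added in quadrature, a common rule for independent uncertainties.
    rel_unc = math.sqrt((mass_unc / mass) ** 2 + (volume_unc / volume) ** 2)
    abs_unc = rel_unc * density

    print(f"density = {density:.2f} +/- {abs_unc:.2f} g/cm^3")   # 2.65 +/- 0.09
    print(f"relative uncertainty = {rel_unc * 100:.1f} %")       # 3.2 %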
http://www.lhup.edu/~dsimanek/glossary.htm
FUNDAMENTAL LOGIC CIRCUITS Upon completing this chapter, you should be able to do the following: Identify general logic conditions, logic states, logic levels, and positive and negative logic as these terms and characteristics apply to the inputs and outputs of fundamental logic circuits. Indentify the following logic circuit gates and interpret and solve the associated Truth Tables: In chapter 1 you learned that the two digits of the binary number system can be represented by the state or condition of electrical or electronic devices. A binary 1 can be represented by a switch that is closed, a lamp that is lit, or a transistor that is conducting. Conversely, a binary 0 would be represented by the same devices in the opposite state: the switch open, the lamp off, or the transistor in cut-off. In this chapter you will study the four basic logic gates that make up the foundation for digital equipment. You will see the types of logic that are used in equipment to accomplish the desired results. This chapter includes an introduction to Boolean algebra, the logic mathematics system used with digital equipment. Certain Boolean expressions are used in explanation of the basic logic gates, and their expressions will be used as each logic gate is introduced. Logic is defined as the science of reasoning. In other words, it is the development of a reasonable or logical conclusion based on known information. Consider the following example: If it is true that all Navy ships are gray and the USS Lincoln is a Navy ship, then you would reach the logical conclusion that the USS Lincoln is gray. To reach a logical conclusion, you must assume the qualifying statement is a condition of truth. For each statement there is also a corresponding false condition. The statement "USS Lincoln is a Navy ship" is true; therefore, the statement "USS Lincoln is not a Navy ship" is false. There are no in-between conditions. Computers operate on the principle of logic and use the TRUE and FALSE logic conditions of a logical statement to make a programmed decision. The conditions of a statement can be represented by symbols (variables); for instance, the statement "Today is payday" might be represented by the symbol P. If today actually is payday, then P is TRUE. If today is not payday, then P is FALSE. As you can see, a statement has two conditions. In computers, these two conditions are represented by electronic circuits operating in two LOGIC STATES. These logic states are 0 (zero) and 1 (one). Respectively, 0 and 1 represent the FALSE and TRUE conditions of a statement. When the TRUE and FALSE conditions are converted to electrical signals, they are referred to as LOGIC LEVELS called HIGH and LOW. The 1 state might be represented by the presence of an electrical signal (HIGH), while the 0 state might be represented by the absence of an electrical signal (LOW). If the statement "Today is payday" is FALSE, then the statement "Today is NOT payday" must be TRUE. This is called the COMPLEMENT of the original statement. In the case of computer math, complement is defined as the opposite or negative form of the original statement or variable. If today were payday, then the statement "Today is not payday" would be FALSE. The complement is shown by placing a bar, or VINCULUM, over the statement symbol (in this case, P). This variable is spoken as NOT P. Table 2-1 shows this concept and the relationship with logic states and logic levels. Table 2-1. 
- Relationship of Digital Logic Concepts and Terms Example 1: Assume today is payday In some cases, more than one variable is used in a single expression. For example, the expression ABCD is spoken "A AND B AND NOT C AND D." POSITIVE AND NEGATIVE LOGIC To this point, we have been dealing with one type of LOGIC POLARITY, positive. Let's further define logic polarity and expand to cover in more detail the differences between positive and negative logic. Logic polarity is the type of voltage used to represent the logic 1 state of a statement. We have determined that the two logic states can be represented by electrical signals. Any two distinct voltages may be used. For instance, a positive voltage can represent the 1 state, and a negative voltage can represent the 0 state. The opposite is also true. Logic circuits are generally divided into two broad classes according to their polarity - positive logic and negative logic. The voltage levels used and a statement indicating the use of positive or negative logic will usually be specified on logic diagrams supplied by manufacturers. In practice, many variations of logic polarity are used; for example, from a high-positive to a low-positive voltage, or from positive to ground; or from a high-negative to a low-negative voltage, or from negative to ground. A brief discussion of the two general classes of logic polarity is presented in the following paragraphs. Positive logic is defined as follows: If the signal that activates the circuit (the 1 state) has a voltage level that is more POSITIVE than the 0 state, then the logic polarity is considered to be POSITIVE. Table 2-2 shows the manner in which positive logic may be used. Table 2-2. - Examples of Positive Logic As you can see, in positive logic the 1 state is at a more positive voltage level than the 0 state. As you might suspect, negative logic is the opposite of positive logic and is defined as follows: If the signal that activates the circuit (the 1 state) has a voltage level that is more NEGATIVE than the 0 state, then the logic polarity is considered to be NEGATIVE. Table 2-3 shows the manner in which negative logic may be used. Table 2-3. - Examples of Negative Logic NOTE: The logic level LOW now represents the 1 state. This is because the 1 state voltage is more negative than the 0 state. In the examples shown for negative logic, you notice that the voltage for the logic 1 state is more negative with respect to the logic 0 state voltage. This holds true in example 1 where both voltages are positive. In this case, it may be easier for you to think of the TRUE condition as being less positive than the FALSE condition. Either way, the end result is negative logic. The use of positive or negative logic for digital equipment is a choice to be made by design engineers. The difficulty for the technician in this area is limited to understanding the type of logic being used and keeping it in mind when troubleshooting. NOTE: UNLESS OTHERWISE NOTED, THE REMAINDER OF THIS BOOK WILL DEAL ONLY WITH POSITIVE LOGIC. LOGIC INPUTS AND OUTPUTS As you study logic circuits, you will see a variety of symbols (variables) used to represent the inputs and outputs. The purpose of these symbols is to let you know what inputs are required for the desired output. If the symbol A is shown as an input to a logic device, then the logic level that represents A must be HIGH to activate the logic device. That is, it must satisfy the input requirements of the logic device before the logic device will issue the TRUE output. 
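The relationships described above between TRUE/FALSE conditions, 1/0 logic states, HIGH/LOW logic levels, the complement (NOT P), and positive versus negative logic can be sketched in a few lines of Python. This is only an illustration of the idea; the voltage levels shown are hypothetical examples, not values taken from the tables in this chapter.
# Illustration of logic states, complements, and logic polarity.
# The voltage levels below are hypothetical examples.

def complement(p: bool) -> bool:
    """NOT P: the opposite condition of statement P."""
    return not p

def level_positive_logic(state: int) -> str:
    """Positive logic: the 1 state is the more POSITIVE voltage (HIGH)."""
    return "HIGH (+5 V)" if state == 1 else "LOW (0 V)"

def level_negative_logic(state: int) -> str:
    """Negative logic: the 1 state is the more NEGATIVE voltage (LOW)."""
    return "LOW (-5 V)" if state == 1 else "HIGH (0 V)"

P = True                      # "Today is payday" is TRUE
state = 1 if P else 0         # TRUE -> logic state 1, FALSE -> logic state 0
print("P:", P, "| state:", state, "| positive logic level:", level_positive_logic(state))
print("NOT P:", complement(P), "| state:", 1 - state, "| negative logic level:", level_negative_logic(1 - state))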
Look at view A of figure 2-1. The symbol X represents the input. As long as the switch is open, the lamp is not lit. The open switch represents the logic 0 state of variable X. Figure 2-1. - Logic switch: A. Logic 0 state; B. Logic 1 state. Closing the switch (view B) represents the logic 1 state of X. Closing the switch completes the circuit, causing the lamp to light. The 1 state of X satisfied the input requirement and the circuit therefore produced the desired output (logic HIGH); current was applied to the lamp, causing it to light. If you consider the lamp as the output of a logic device, then the same conditions exist. The TRUE (1 state) output of the logic device is to have the lamp lit. If the lamp is not lit, then the output of the logic device is FALSE (0 state). As you study logic circuits, it is important that you remember the state (1 or 0) of the inputs and outputs. So far in this chapter, we have discussed the two conditions of logical statements, the logic states representing these two conditions, the logic levels and associated electrical signals, and positive and negative logic. We are now ready to proceed with individual logic device operations. These make up the majority of computer circuitry. As each of the logic devices is presented, a chart called a TRUTH TABLE will be used to illustrate all possible input and corresponding output combinations. Truth Tables are particularly helpful in understanding a logic device and for showing the differences between devices. The logic operations you will study in this chapter are the AND, OR, NOT, NAND, and NOR. The devices that accomplish these operations are called logic gates, or more informally, gates. These gates are the foundation for all digital equipment. They are the "decision-making" circuits of computers and other types of digital equipment. By making decisions, we mean that certain conditions must exist to produce the desired output. In studying each gate, we will introduce various mathematical SYMBOLS known as BOOLEAN ALGEBRA expressions. These expressions are nothing more than descriptions of the input requirements necessary to activate the circuit and the resultant circuit output. THE AND GATE The AND gate is a logic circuit that requires all inputs to be TRUE at the same time in order for the output to be TRUE.
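Since the chapter relies on Truth Tables to describe each gate, a small sketch can generate one. The following Python snippet, included only as an illustration and not taken from the original text, prints the truth table for a two-input AND gate; swapping the expression in the loop gives the OR, NAND, and NOR tables.
# Truth table for a two-input AND gate (f = A AND B).
# For OR use (a or b), for NAND use not (a and b), for NOR use not (a or b).
from itertools import product

print(" A  B | f = A AND B")
for a, b in product([0, 1], repeat=2):
    f = int(a and b)            # output is 1 only when all inputs are 1
    print(f" {a}  {b} |      {f}")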
http://www.tpub.com/neets/book13/54.htm
In a variety of computer languages, the function atan2 is the arctangent function with two arguments. The purpose of using two arguments instead of one is to gather information on the signs of the inputs in order to return the appropriate quadrant of the computed angle, which is not possible with the single-argument arctangent function. For any real number (e.g., floating point) arguments x and y not both equal to zero, atan2(y, x) is the angle in radians between the positive x-axis of a plane and the point given by the coordinates (x, y) on it. The angle is positive for counter-clockwise angles (upper half-plane, y > 0), and negative for clockwise angles (lower half-plane, y < 0). The atan2 function was first introduced in computer programming languages, but now it is also common in other fields of science and engineering. It dates back at least as far as the FORTRAN programming language and is currently found in C's math.h standard library, the Java Math library, .NET's System.Math (usable from C#, VB.NET, etc.), the Python math module, the Ruby Math module, and elsewhere. Many scripting languages, such as Perl, include the C-style atan2 function. In mathematical terms, atan2 computes the principal value of the argument function applied to the complex number x+iy. That is, atan2(y, x) = Pr arg(x+iy) = Arg(x+iy). The argument can be changed by 2π (corresponding to a complete turn around the origin) without making any difference to the angle, but to define atan2 uniquely one uses the principal value in the range (−π, π]. That is, −π < atan2(y, x) ≤ π. The atan2 function is useful in many applications involving vectors in Euclidean space, such as finding the direction from one point to another. A principal use is in computer graphics rotations, for converting rotation matrix representations into Euler angles. In some computer programming languages, the order of the parameters is reversed (for example, in some spreadsheets) or a different name is used for the function (for example, Mathematica uses ArcTan[x, y]). On scientific calculators the function can often be calculated as the angle given when (x, y) is converted from rectangular coordinates to polar coordinates. The one-argument arctangent function cannot distinguish between diametrically opposite directions. For example, the anticlockwise angle from the x-axis to the vector (1, 1), calculated in the usual way as arctan(1/1), is π/4 (radians), or 45°. However, the angle between the x-axis and the vector (−1, −1) appears, by the same method, to be arctan(−1/−1), again π/4, even though the answer clearly should be −3π/4, or −135°. The atan2 function takes into account the signs of both vector components and places the angle in the correct quadrant. Thus, atan2(1, 1) = π/4 and atan2(−1, −1) = −3π/4. Additionally, the ordinary arctangent method breaks down when required to produce an angle of ±π/2 (or ±90°). For example, an attempt to find the angle between the x-axis and the vector (0, 1) requires evaluation of arctan(1/0), which fails on division by zero. In contrast, atan2(1, 0) gives the correct answer of π/2. When calculations are performed manually, the necessary quadrant corrections and exception handling can be done by inspection, but in computer programs it is extremely useful to have a single function that always gives an unambiguous correct result.
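The quadrant ambiguity described above is easy to demonstrate. The short Python session below, using the standard math module, reproduces the (1, 1) versus (−1, −1) example and the ±π/2 case; it is included purely as an illustration.
import math

# arctan(y/x) cannot tell (1, 1) from (-1, -1): both ratios equal 1.
print(math.atan(1 / 1))        #  0.785... =  pi/4
print(math.atan(-1 / -1))      #  0.785... =  pi/4 again (wrong quadrant)

# atan2 uses the signs of both components and returns the correct angle.
print(math.atan2(1, 1))        #  0.785... =  pi/4
print(math.atan2(-1, -1))      # -2.356... = -3*pi/4

# The x = 0 case: arctan(1/0) would fail on division by zero, atan2 does not.
print(math.atan2(1, 0))        #  1.570... =  pi/2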
Definition and computation
In terms of the standard arctan function, whose range is (−π/2, π/2), atan2 can be expressed as follows:

atan2(y, x) =
  arctan(y/x)        if x > 0
  arctan(y/x) + π    if x < 0 and y ≥ 0
  arctan(y/x) − π    if x < 0 and y < 0
  +π/2               if x = 0 and y > 0
  −π/2               if x = 0 and y < 0
  undefined          if x = 0 and y = 0

- This produces results in the range (−π, π], which can be mapped to [0, 2π) by adding 2π to negative results.
- Traditionally, atan2(0, 0) is undefined.
- The C function atan2, and most other computer implementations, are designed to reduce the effort of transforming cartesian to polar coordinates and so always define atan2(0, 0). On implementations without signed zero, or when given positive zero arguments, it is normally defined as 0. It will always return a value in the range [−π, π] rather than raising an error or returning a NaN (Not a Number).
- Systems supporting symbolic mathematics normally return an undefined value for atan2(0, 0) or otherwise signal that an abnormal condition has arisen.
- For systems implementing signed zero, infinities, or Not a Number (for example, IEEE floating point), it is common to implement reasonable extensions which may extend the range of values produced to include −π and −0. These also may return NaN or raise an exception when given a NaN argument.
The free math library FDLIBM (Freely Distributable LIBM), available from netlib, has source code showing how it implements atan2, including its handling of the various IEEE exceptional values. For systems without a hardware multiplier the function atan2 can be implemented in a numerically reliable manner by the CORDIC method. Thus implementations of atan(y) will probably choose to compute atan2(y, 1). The following expression, derived from the tangent half-angle formula, can also be used to define atan2:

atan2(y, x) = 2 arctan( y / (√(x² + y²) + x) )

This expression may be more suited for symbolic use than the definition above. However, it is unsuitable for floating point computational use, as it is undefined for y = 0, x < 0 and may overflow near these regions. The formula gives a NaN or raises an error for atan2(0, 0), but this is not an issue since atan2(0, 0) is not defined. A variant of the last formula is sometimes used in high precision computation. It avoids the overflow but is always undefined when y = 0:

atan2(y, x) = 2 arctan( (√(x² + y²) − x) / y )

Variations and notation
- In Common Lisp, where optional arguments exist, the atan function allows one to optionally supply the x coordinate: (atan y x).
- In Mathematica, the form ArcTan[x, y] is used where the one-parameter form supplies the normal arctangent. Mathematica classifies ArcTan[0, 0] as an indeterminate expression.
- In Microsoft Excel, the atan2 function has the two arguments reversed. OpenOffice.org Calc also reverses the arguments, as does the Google Spreadsheets atan2 function.
- In the Intel Architecture assembler code, atan2 is known as the FPATAN (floating-point partial arctangent) instruction. It can deal with infinities, and results lie in the closed interval [−π, π]; e.g. atan2(∞, x) = +π/2 for any finite x. In particular, FPATAN is defined when both arguments are zero:
  atan2(+0, +0) = +0
  atan2(+0, −0) = +π
  atan2(−0, +0) = −0
  atan2(−0, −0) = −π
  This definition is related to the concept of signed zero.
- On most TI graphing calculators (excluding the TI-85 and TI-86), the equivalent function is called R►Pθ and has the arguments reversed.
- In mathematical writings other than source code, such as in books and articles, the notations Arctan and Tan−1 have been utilized; these are uppercase variants of the regular arctan and tan−1. This usage is consistent with the complex argument notation, such that Atan(y, x) = Arg(x + iy).
The diagram alongside shows values of atan2 at selected points on the unit circle.
The values, in radians, are shown inside the circle. The diagram uses the standard mathematical convention that angles increase anticlockwise (counterclockwise) and that zero is to the right. Note that the order of arguments is reversed; the function atan2(y, x) computes the angle corresponding to the point (x, y). The diagram below shows values of atan2 for points on the unit circle. On the x-axis is the complex angle of the points, starting from 0 (the point (1, 0)) and going anticlockwise (counterclockwise) through the points:
- (0, 1) with complex angle π/2 (in radians),
- (−1, 0) with complex angle π,
- (0, −1) with complex angle 3π/2,
to (1, 0) with complex angle 0 = (2nπ mod 2π).
The diagrams below show 3D views of atan2(y, x) and arctan(y/x), respectively, over a region of the plane. Note that for atan2, rays emanating from the origin have constant values, but for arctan, lines passing through the origin have constant values. For x > 0, the two diagrams give identical values. As the function atan2 is a function of two variables, it has two partial derivatives. At points where these derivatives exist, atan2 is, except for a constant, equal to arctan(y/x); hence

∂ atan2(y, x)/∂x = −y/(x² + y²),   ∂ atan2(y, x)/∂y = x/(x² + y²).

Informally representing the function atan2 as the angle function θ(x, y) = atan2(y, x) (which is only defined up to a constant) yields the following formula for the total derivative:

dθ = −y/(x² + y²) dx + x/(x² + y²) dy = (x dy − y dx)/(x² + y²).

While the function atan2 is discontinuous along the negative x-axis, reflecting the fact that angle cannot be continuously defined, this derivative is continuously defined except at the origin, reflecting the fact that infinitesimal (and indeed local) changes in angle can be defined everywhere except the origin. Integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of differential geometry, this derivative is a one-form, and it is closed (its derivative is zero) but not exact (it is not the derivative of a 0-form, i.e., a function), and in fact it generates the first de Rham cohomology of the punctured plane. This is the most basic example of such a form, and it is fundamental in differential geometry. The partial derivatives of atan2 do not contain trigonometric functions, making it particularly useful in many applications (e.g. embedded systems) where trigonometric functions can be expensive to evaluate.
References
- Organick, Elliott I. (1966). A FORTRAN IV Primer. Addison-Wesley. p. 42. "Some processors also offer the library function called ATAN2, a function of two arguments (opposite and adjacent)."
- The Linux Programmer's Manual.
- "CLHS: Function ASIN, ACOS, ATAN". LispWorks.
- "Atan2 Method". Microsoft.
- "Function list". Google.
- IA-32 Intel Architecture Software Developer's Manual. Volume 2A: Instruction Set Reference, A-M, 2004.
- Computation of the external argument, by Wolf Jung.
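To tie the definition section together, the sketch below implements atan2 piecewise from the one-argument arctan, following the case analysis given under "Definition and computation", and compares it with the library function. It is an illustrative sketch only, not the FDLIBM or CORDIC implementations mentioned above.
import math

def atan2_from_atan(y: float, x: float) -> float:
    """Piecewise definition of atan2 in terms of the one-argument arctan."""
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("atan2(0, 0) is undefined in the traditional definition")

# Spot-check against the library on points in all four quadrants and on the axes.
for y, x in [(1, 1), (1, -1), (-1, -1), (-1, 1), (1, 0), (-1, 0), (0, -1)]:
    assert math.isclose(atan2_from_atan(y, x), math.atan2(y, x))
print("piecewise definition matches math.atan2 on the test points")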
http://en.wikipedia.org/wiki/Atan2
1. NATURAL FERTILIZER AND HEALTHY SOIL Natural fertilizer, often called compost, provides the food needed for a plant to grow after a seed has germinated in the soil. This food consists of plant nutrients. The most important, called macro, of these nutrients are nitrogen (N), phosphorus (P) and potassium (K). There are also many other nutrients needed by plants in small quantities, e.g. copper (Cu) manganese (Mn), magnesium (Mg), iron (Fe), sulphur (S) and others. These are called micronutrients or trace elements. Natural fertilizer also provides organic matter called humus for the soil. Humus is a black or brown spongy or jelly-like substance. It helps the soil have a good structure to hold water and air. One of the best natural fertilizers is mature compost because it feeds the soil with humus and plant nutrients. The growing plants take their nutrients from the top layers of the soil where their roots grow. Plant nutrients are lost from the soil when they are washed down (leached) below the top soil, or when the top soil is eroded. Plant nutrients are also lost with the crops when these are harvested. When the surface of the land is broken up for farming, the soil is often eroded: it is blown away by the wind or washed away by rain and floods. The soil also loses much of its carbon content as carbon dioxide (CO2) into the atmosphere, thus contributing to climate change. The soil that is left becomes poor in plant nutrients so the crops do not grow well and give a good yield. But if the plant nutrients and carbon are returned to the soil, it can continue to grow good crops as well as contribute to slowing down the negative impacts of climate change. Farmers can replace the lost plant nutrients by using natural fertilizers, such as compost. Natural fertilizer comes from the breakdown and decomposition of animal wastes and plants; for example, cow dung, sheep, goat or chicken droppings, urine, decomposed weeds and other plant or animal remains, such as waste from preparing food. The fertilizer can also be made of chemicals in a factory. Farmers have to buy this type of fertilizer from the market or through farmers’ service cooperatives. Therefore fertilizers are of two types: A. Natural fertilizer, including compost B. Man-made chemical fertilizer Throughout the world there are many options for replacing the plant nutrients lost from soil, but, in our case and in many other parts of Sub-Saharan Africa where most of the agriculture is done by smallholder farmers, the best option is compost produced by human labour using the natural materials available to farmers and others, such as students and youth, from their surroundings. Good quality compost can also be made from organic household wastes in urban areas and be used to grow healthy vegetables in gardens at home or by school environment club or youth group members. The soil is a complex mixture of the following: Non-living materials—solid particles from broken down rocks, air and water; Living organisms—bacteria, fungi, many small and very small (microscopic) animals, plants such as algae and plant roots; and The decayed and decomposed remains of living organisms—humus. The solid particles provide the basic structure or skeleton of the soil. Generally three types of particles are recognized: sand, silt and clay. Sandy soil is rough to feel because it is made of large grains. Sandy soil does not hold much water. Silty soil is finer to feel than sandy soil. When it is moist, the particles stick together in crumbs. 
Clay soil is very soft when wet as the particles are very small. They stick together even when the soil is dry and hard. Clay particles swell when they get wet and water cannot pass through easily, Natural soils consist of combinations of sand, silt and clay. The sand holds some plant nutrients and helps provide good drainage of excess water from a soil. Silt holds more plant nutrients and helps to hold water in the soil. Clay holds even more plant nutrients and water, but has little air. Loam or loamy soil contains a balance of sand, silt and clay. In a healthy soil, all these particles are coated with a layer of humus. This gives the soil its brown colour, good smell and structure. The humus also holds and helps keep plant nutrients and water in the soil. The nutrients in the humus are released slowly and constantly, as long as there is enough moisture. Humus helps the soil particles stick together, but they do not fit tightly together. A loam soil with good humus has spaces or pores between the particles for water and air to get into and move through the soil. Humus is important for a soil because it: Holds moisture, like a sponge, Holds nutrients for plant nutrition and growth of micro-organisms particularly fungi and bacteria, Acts as a buffer against changes in pH of the soil, Allows air to get into the soil, and Contributes to a good soil structure. A healthy soil contains 12 or more percent of carbon, i.e. organic matter. The organic matter is the source of energy for the bacteria, fungi and other organisms in the soil. These organisms break down dead plant and animal remains releasing carbon dioxide, water and mineral salts, including nitrates, phosphates, etc. which are the nutrients for growing plants. Some of the water in the soil is held tightly by the soil particles, especially by the clay, and plants cannot use it. Other water moves more freely through the pores, and this is available for plant growth. Humus acts as a water reservoir for the plant roots and other organisms in the soil. It can hold up to six times its own weight in water. The air in the soil has much more carbon dioxide than the above ground atmosphere. This is because the plant roots and the other living things in the soil produce carbon dioxide when they ‘breath’, but the movement of air in the soil is slow and the carbon dioxide does not move out into the air as fast as from animals living above ground. There are many organisms that live in the soil, The bacteria and fungi are particularly important in breaking down plant and animal waste materials, and making plant nutrients available. Many fungi and bacteria also help in transferring nutrients from the soil to the roots of plants. The larger animals, worms, beetles, etc. help break down dead things into a condition that the bacteria and fungi can digest. These animals also move and mix the soil, sometimes dramatically like earthworms and termites. In a healthy soil, there is a very large mixed population of all these organisms. They each have a role to play in keeping the soil healthy, and hence, also the crops that grow on the soil. Pests are not usually a problem in a healthy soil. Thus, healthy soil produces healthy food. 2. THE CHARACTER OF COMPOST 2.1 Why is compost important? 
Compost is important because it: 1) Contains the main plant nutrients – nitrogen (N), phosphorus (P) and potassium (K), often written as NPK; 2) Improves the organic matter in the soil by providing humus; 3) Helps the soil hold both water and air for plants; and 4) Makes micronutrients and trace elements available to plants. 2.2 What can compost be used for? Because compost is made up of humus, it can be used for improving soil as follows: 1) It provides plant nutrients that are released throughout the growing season. The plant nutrients are released when organic matter decomposes and is changed in to humus. The plant nutrients dissolve in the water in the soil and are taken in by the roots of the crops. 2) It improves soil structure so that plant roots can easily reach down into the soil. In sandy soil the humus makes the sand particles stick together. This reduces the size of the spaces (pores) so that water stays longer in the soil. In clay soils, the humus surrounds the clay particles making more spaces (pores) in the soil so the root systems of plants can reach the water and nutrients that they need, and air can also move through the soil. Therefore, because heavy clay soils become lighter, and sandy soils become heavier, soil that has had compost added to it is easier to work, i.e. to plough and dig. 3) It improves the moisture-holding capacity of soil. The humus is a dark brown or black soft spongy or jelly-like substance that holds water and plant nutrients. One kilogram of humus can hold up to six litres of water. In dry times, soil with good humus in it can hold water longer than soil with little humus. In Ethiopia, crops grown on soil with compost can go on growing for two weeks longer after the rains have stopped than crops grown on soil given chemical fertilizer. When it rains, water easily gets into the soil instead of running off over the surface. Water gets into the subsoil and down to the water table so that runoff and thus flooding is reduced, and springs do not dry up in the dry season. 4) It helps to control weeds, pests and diseases. When weeds are used to make compost, the high temperature of the compost-making process kills many, but not all, of the weed seeds. Even the noxious weed, Parthenium, has most of its seeds killed when it is made into compost following the instructions given in this booklet. Fertile soil produces strong plants able to resist pests and diseases. When crop residues are used to make compost, many pests and diseases cannot survive to infect the next season’s crops. 5) It helps the soil resist erosion by wind and water. This is because: Water can enter the soil better and this can stop showers building up into a flood. This also reduces splash and sheet erosion. Soil held together with humus cannot be blown away so easily by wind. 6) Compost helps farmers improve the productivity of their land and their income. It is made without having to pay cash or borrow money, i.e. farmers do not have to take credit and get into debt like they do for taking chemical fertilizer. But, to make and use compost properly farmers, either individually or working in groups, have to work hard. 2.3 What is needed to make compost? 2.3.1 Plant materials, both dry and green 1) Weeds, grasses and any other plant materials cut from inside and around fields, in clearing paths, in weeding, etc. 2) Wastes from cleaning grain, cooking and cleaning the house and compound, making food and different drinks, particularly coffee, tea, home-made beer, etc. 
3) Crop residues: stems, leaves, straw and chaff of all field crops—both big and small—cereals, pulses, oil crops, horticultural crops and spices, from threshing grounds and from fields after harvesting. 4) Garden wastes—old leaves, dead flowers, hedge trimmings, grass cuttings, etc. 5) Dry grass, hay and straw left over from feeding and bedding animals. Animal bedding is very useful because it has been mixed with the urine and droppings of the animals. 6) Dropped leaves and stems from almost any trees and bushes except plants which have tough leaves, or leaves and stems with a strong smell or liquid when crushed, like Eucalyptus, Australian Acacia, Euphorbia, etc. However, we have found farmers making good quality compost including stems of Euphorbia. 7) Stems of cactus, such as prickly pear, can be used if they are crushed or chopped up and spread in each layer in small quantities. They are also a good source of moisture for making compost in dry areas. When the compost is made correctly, the spines are destroyed. Enough water is needed to wet all the materials and keep them moist, but the materials should not be made too wet so that they lack air and thus rot and smell bad. Both too little and too much water prevent good compost being made. Water does NOT need to be clean like drinking water. It can come from: Collected wastewater, e.g. from washing pots and pans, clothes, floors, etc. However, it should not contain detergents (washing powders and liquids such as Omo). Water can also be collected from ponds, dams, streams and rivers, particularly if men and boys are willing to do it. It is not fair to expect women and girls to collect all the water needed to make compost as they also have to collect all the water for cooking and drinking in the home. 2.3.3 Animal materials 1) Dung and droppings from all types of domestic animals, including from horses, mules, donkeys and chicken, from night pens and shelters, or collected from fields. 2) Chicken droppings are important to include because they are rich in nitrogen. 3) Urine from cattle and people: Catch urine in a container from animals when they wake up and start moving around in the morning. Provide a container—like an old clay pot or plastic jerrycan—in the toilet or latrine where people can pass or put their urine. Night soil (human faeces): almost all human parasites and other disease organisms in human faeces are killed by the high temperatures when good compost is made. 2.3.4 Compost making aids – ‘farmers’ friends’ Micro-organisms (fungi and bacteria) and smaller animals (many types of worms, including earthworms, nematodes, beetles and other insects, etc) turn waste materials into mature compost. These are found naturally in good fertile soils like those from forests and woodlands, old animal dung, and old compost. Adding any of these to new compost helps in the decomposition process. Adding compost making aids is like adding yeast to the dough to make bread. The farmers in Ethiopia call all these materials the ‘spices’ to make good compost. Including dry materials in the compost, e.g. old leaves and stalks, provides space for air to circulate inside the compost. Air is needed because the soil organisms need oxygen. Decomposition of organic wastes produces heat. Compost needs to be kept hot and moist so the plant and animal materials can be broken down quickly and thoroughly. Heat destroys most of the weed seeds, fungal diseases, pests and parasites. 
2.4 The contributions of the different compost-making materials 2.4.1 Have a good balance of carbon and nitrogen Both carbon and nitrogen are needed to make good compost. They are used by the micro-organisms to grow and multiply, and to get energy. Some of the carbon is converted to carbon dioxide, and this escapes to the atmosphere. Most of it remains and becomes humus, and the nitrogen becomes nitrates. Methane is not produced if there is a good supply of air to the organisms carrying out the decomposition process. Materials with good nitrogen content help in making good compost, but they should be less than the carbon-containing materials. Carbon-containing materials should always be more than those containing high nitrogen, i.e. a ratio of 2:1 up to 3:1 is the best. A good balance of carbon and nitrogen helps make good compost. Table 1 gives the average carbon-to-nitrogen balance for some types of composting materials.
Table 1: The average nitrogen and carbon content of some selected composting materials
Type of composting material | Nitrogen content (%) | Carbon-to-nitrogen ratio (C:N)
Blood (e.g. from slaughter houses) | 10–14 | 3:1
Bone | 3 | 8:1
Cow manure, fresh | 2–3 | 20:1
Horn | 12 | not found
Horse and donkey manure | 3–8 | 25:1
Horse and donkey manure with litter/bedding | 2–3 | 60:1
Manure from animal pens = farmyard manure (FYM) | 2–3 | 14:1
Manure in general | 2–3 | 18:1
Poultry manure, fresh | 3–6 | 10–12:1
Poultry manure with litter/bedding | 2–3 | 18:1
Sheep manure | 3–4 | not found
Urine | 15–18 | 0.8:1
Barley/wheat straw and residues from threshing floors | 0.4–0.6 | 80–100:1
Fallen leaves | 0.4 | 45:1
Maize/sorghum stalks and leaves | 0.7–0.8 | 55–70:1
Young grass hay | 4 | 12:1
Alfalfa hay | 3–4 | 10:1
Grass clippings (fresh or wilted) | 2–3 | 12–25:1
Straw from peas and beans | 1.5 | not found
Vegetable stems and trimmings | 2–3 | 12–20:1
Coffee grounds | 2–3 | 20:1
Compost | 3–4 | 7–10:1
Source: compiled from Dalzell et al. (1979), NAS (1981), Minnich et al. (1979), Cyber-north (2004), Cooperband (2002) and Ravishankar et al. (2001)
With nitrogen as 1, high figures in the carbon-to-nitrogen column indicate high carbon content. These items are good for making compost. Items with low carbon content, like urine and chicken manure, are useful to provide nitrogen, but they must be mixed with materials with high carbon content. 1) When there is enough air and moisture in the compost, nitrogen-containing materials are broken down and the nitrogen is changed to nitrates that can be used by plants. 2) When there is too much water and little air, the nitrogen is changed into ammonia. This is a gas that escapes from the compost, and gives the compost a bad smell. 3) When there is a bad smell, the compost needs to be turned over, bringing the top to the bottom and the bottom to the top, and mixing in more dry materials and some good soil. This puts more air into the compost, which stops the process of making ammonia so that proper mature compost can be made. 2.4.2 The contributions of dry and green plant materials Dry materials give structure to the compost making process; they provide space for air to circulate so that the micro-organisms can be active and produce heat. Green plant materials provide moisture for compost making; they give water and nutrients to the micro-organisms so that they multiply and break down the organic materials into humus. Box 1: Examples of some plant materials for making compost Crop straws absorb water without changing their physical structure.
They are good for keeping air in the compost, but they do not mix easily with other materials and decompose slowly. Grass and other green materials have usually lost water and wilted before they are put into the compost. They can hold moisture longer in a compost pit than in a compost heap. 2.4.3 The importance of good water/moisture and air balance Water is essential for compost preparation. 1) Sufficient moisture helps for quicker decomposition because it is essential for soil organism to be active. 2) Excess water causes rotting of the materials and creates a bad smell. 3) Without enough moisture the decomposition process slows down and the materials will not be changed into compost. This shows that moisture and air must be balanced to make good compost. Farmers quickly learn how to judge the amount of water needed to be added in making compost. 2.4.4 The importance of air Compost should have sufficient air. 1) When there is sufficient air, oxygen enters the compost heap. When there is enough oxygen, special bacteria can convert nitrogen into nitrate, the materials are decomposed properly and there is a good smell. 2) If there is not enough air and too much water, the nitrogen is converted into ammonia. The ammonia escapes into the air removing nitrogen from the compost and making it smell bad. 3) If there is excess air and too little water, the materials dry up and do not decompose to become compost. 2.4.5 Quality compost with animal dung and urine 1) Animal dung contains water, nitrogen, phosphorous and potassium, as well as micro-nutrients. 2) Animal dung and urine are very necessary to prepare good quality compost—urine especially is high in potassium and nitrogen. 3) Both dung and urine help to produce a high temperature so that the materials decompose into compost quickly and easily. 4) Urine, in particular, accelerates decomposition. 2.4.6 Important compost making aids Compost making aids are farmers’ friends as they help speed up the process of decomposition. They are like the yeast in making bread and beer or wine, or the salt and spices in making tasty food. They include: Good top soil and old compost. These contain bacteria, fungi and many small animals to work on breaking down the materials into mature compost. Ashes from all types of plant materials except charcoal are good to mix in because they contain phosphorous, potassium, and many micro-nutrients like zinc, iron and magnesium. Ash is important in making compost, but it should be added in small amounts or mixed with the dry plant materials. If ash is added in a large amount, it stops the movement of compost-making organisms, water and air and finally prevents the decomposition process from continuing. Heat is produced by the action of bacteria and fungi on the plant and animal materials, and their activity keeps the compost hot. Covering compost with a black plastic sheet can also absorb the heat from the sun and stop it escaping so that the compost making process goes fast. Large (Macro) soil organisms: Look for larger organisms, like earthworms, beetles, and chafer beetle grubs in old moist compost, old animal dung or good top soil and add these to the compost making materials as they are. Do not dry or sieve them as this will kill them. 
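As a worked example of the carbon-to-nitrogen balance described in section 2.4.1, the short Python sketch below estimates the overall C:N ratio of a hypothetical mix of materials using mid-range values from Table 1. The quantities and choice of materials are assumptions made only for illustration, not recommendations.
# Rough estimate of the overall carbon-to-nitrogen (C:N) ratio of a compost mix,
# using mid-range values from Table 1. The mix below is a made-up example.

# material: (kg in the mix, nitrogen content as a fraction, C:N ratio)
mix = {
    "barley/wheat straw":       (50, 0.005, 90),  # ~0.5% N, ~90:1
    "grass clippings (wilted)": (30, 0.025, 18),  # ~2.5% N, ~18:1
    "fresh cow manure":         (20, 0.025, 20),  # ~2.5% N, ~20:1
}

total_n = sum(kg * n for kg, n, _ in mix.values())
total_c = sum(kg * n * cn for kg, n, cn in mix.values())   # carbon = nitrogen x (C:N)

print(f"total nitrogen in the mix: {total_n:.2f} kg")
print(f"total carbon in the mix:   {total_c:.1f} kg")
print(f"overall C:N of the mix is about {total_c / total_n:.0f}:1")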
Composting facilitators / promoters are important because: They provide key bacteria, fungi and micro-organisms to make the compost They provide nutrients for the organisms in the soil so they remain in a good condition and reproduce rapidly They help speed up the composting process and ensure that good quality compost is produced. Methods for using compost making aids include any or all of the following: Make a mixture of dry top soil, old compost and ashes. Then crush it and, if possible, sieve it so it is like salt or a fine powder. Mix the powder with fresh composting materials, particularly with dry or green plant materials like grass and/or straw, and put this in layers between the other materials. Do NOT put the compost making aid material as a layer by itself. It needs to be mixed with the other materials so it can accelerate the compost making process. Ash is good as it contains minerals, BUT if you put a high quantity in one layer, the minerals are strongly concentrated and can slow down or stop the micro-organisms making compost. Mix the ash with the dry or green plant materials. 2.4.7 How micro and macro-organisms work The production of good quality mature compost depends on the number and types of micro and macro-organisms living in the soil. These are living organisms that require air, moisture and heat in the compost heap so that they can live, work and multiply / reproduce. Compost materials supply food and energy (starch, soluble sugars, carbohydrates, amino acids) for the micro-organisms. In the presence of air supplying oxygen, and moisture, the micro-organisms convert the available food into humus and soluble plant nutrients, which stay in the compost heap, and carbon dioxide, which diffuses out into the atmosphere. However, most of the carbon in compost materials stays in the humus and only a small amount leaves as carbon dioxide. As the micro-organisms grow and multiply, they produce heat which speeds up the compost making process. Heat also kills many weed seeds, pests, parasites and diseases from the fields, and in the animal dung and human faeces. The heat ensures that healthy mature compost is produced. 3. CONDITIONS TO BE FULFILLED BEFORE PREPARING COMPOST 3.1 The Indore and Bangalore Methods There are two main methods for preparing compost. One is called the Indore method and the other is the Bangalore method. The names come from districts in India where the compost making processes were first developed. The difference between the two methods is in the way the materials are put together and in the time taken for completing the compost heap or filling the pit. The Indore method can be prepared either in a pit or as a heap or pile above the ground, but its preparation must be completed in less than a week. The complete Indore method uses a sequence of three layers of materials: dry plant materials, green plant materials, animal manure and some soil. It is suitable for times and places where there are plenty of materials to make the mature compost, and labour, such as in a school or with a farmers’ group, to put them together quickly. The NADEP method is like the Indore method except that the tank is filled in one or two days and it always includes animal manure. This method needs a lot of work, but it produces very high quality mature compost without any more labour after the NADEP tank has been filled and sealed. The Bangalore method is prepared in areas where composting materials and water availability are limited, and labour is also limited. 
The materials can be collected over a week or more, and then the new layers are made until either the heap is about 1 to 1.5 metre tall, or the pit is full. The Bangalore method usually uses only two layers of materials: dry plant materials and green plant materials. It is very suitable for making compost from household wastes, or in farms where there are no domestic animals. Both the Indore and the Bangalore methods can include animal manure as an additional layer. Including animal manure ensures the best quality compost. But good quality compost can be made even without animal manure, i.e. just from plant materials and kitchen wastes. Preparing compost needs dedication. Therefore: 1) Decide when and what method to use to make the compost. 2) Look out and search for composting materials that can be collected and carried to the compost-making place. 3) Find out who will provide the water, and how. 4) Decide if it is possible to collect and use urine. 5) Be prepared to give time and effort, i.e. work hard, to prepare good quality compost. 6) Set a target for the area of farmland or garden to be covered by the mature compost. Adding mature compost to a small field or even a small area in a field and then planting it with a high value crop can show good economic returns in a year. 7) Collecting composting materials, layering or piling, and mixing are the main tasks during compost making. These need physical and mental preparation to overcome the burden of hard work, but it is only for a short time. 8) Seeing good crops grow well and getting good yields from well composted soil is very rewarding. In Ethiopia, and other places with warm to hot climates, mature compost can be prepared in three to four months. In colder places, decomposition to make mature compost can take from six months to a year. Box 2: Mamma Yohannesu and finger millet Mamma Yohannesu was an old woman living with her grandson. She had a very small field of about 10 x 25 m near her house. The soil was rather sandy. She managed to make about 5 sacks of compost which she put on this field when her neighbour ploughed it for her. She planted the field with finger millet. In most of the field she scattered the seed, but in a plot of 5 x 5 m she brought and planted young finger millet seedlings she had grown in her house garden. She got a fantastic yield for her efforts — equivalent to 2.8 tonnes/ha for the directly sown finger millet and 7.6 tonnes/ha from the transplanted seedlings. 3.2 Points to remember when making compost in a heap 1) It is good to make a heap in the rainy season when there is plenty of green plants, such as weeds, getting water is easy or the materials are naturally wet, or where there is plenty of water available. 2) The compost heap will be on the ground with its base in a shallow trench to hold the foundation layer. 3) It should be in a place where it can be protected and get covered with leaves or straw or plastic during the rains so that the materials are not damaged or washed away. 4) It can be made under the shade of a tree and covered with wide leaves or plastic in order to protect the heap from high winds. 5) After the rains stop, keep the heap covered and check regularly to see if the moisture and temperature are correct, as described later in the section on follow-up. 3.3 Points to remember when making compost in a pit 1) This is good anytime of the year where moisture is limiting, and is the best way to make compost after the rains have finished and during the dry season. 
2) Prepare and dig the pit, or better still, a series of 3 pits, when the land is moist and easier to dig, and/or when there is a gap between other farming activities. 3) If possible, make the compost immediately at the end of the rainy season while there are plenty of green and moist plant materials. 4) In the dry season, make the pit near a place where water can be added, e.g. next to the home compound where waste water and urine can be thrown on the compost materials, or near a water point, e.g. a pond, or near a stream where animals come to drink. 5) Mark the place of the pit with a ring of stones or a small fence so people and animals do not fall into it accidentally. 4. INDORE COMPOST PREPARATION METHODS The Indore compost preparation method is done over a short period of time and uses a systematic way of putting the materials together, i.e. in layers. This method is most suitable for the rainy season when there are plenty of materials, e.g. weeds, to put into the compost. However, the place for making compost should be well-drained and easy to protect from floods and excess rain. The compost can be made either by piling in a heap or heaps, or in a pit or pits. This method can also be used by vegetable growers when they should clean their fields after harvesting their crops and before the next crop is planted. The residues left after the crop is finished and harvested, such as stems and leaves from pumpkins, potatoes, tomatoes, chilli peppers and courgettes/zucchini, leaves and stalks from cabbage, etc. and any damaged crops that cannot be sold or eaten, should be collected together and organized for making compost. Using these left over materials for making compost prevents the pests and diseases in the old plants and diseased fruits from going on living so that the next crop does not get so easily damaged. 4.1 Indore Piling Method 4.1.1 Selecting the site The following factors need to be considered: 1) The site should be accessible for receiving the materials, including water and/or urine, and for frequent watching/monitoring and follow up. 2) The site should be protected from strong sunlight and wind, e.g. in the shade of a tree, or on the west or north side of a building or wall. 3) The site should be protected from high rainfall and flooding. 4.1.2 Preparing the site 1) Clear the site of stones, weeds and grasses, but do not cut down any young trees. Instead, put the site so it is in the shade of the tree(s). The tree(s) will grow, provide shade and protect the compost heap. 2) Mark out the area for the compost heap. A minimum area is a square of 1.25 m x 1.25 m. If it is smaller than this, the heap will dry out quickly so compost will not be made properly. The area can be larger, up to 3 m x 2.5 m. 3) Dig a shallow trench in the ground the same size as the compost heap. Make the trench about 20-25 cm deep. The bottom and sides of the trench should be smeared with water or a mixture of cow dung and water. This seals the pit so that moisture with nutrients do not leak out of the base of the compost heap. 4) The foundation layer of compost making materials is placed in the trench or pit. 5) The trench holds moisture during the dry season. 6) Materials are added in layers to make the heap, described in more detail below. 4.1.3 The layers in making the compost heap The foundation layer 1) Dry plant materials, e.g. strong straw and stalks of maize and sorghum, which are thick and long, are used for the foundation. These need to be broken into short lengths (about 10-15 cm long). 
The stalks can be crushed, and then chopped. If possible let cattle lie down or sleep on them for one night. Walking cattle over the stems and stalks, as in threshing, is a good way of breaking up the stalks. 2) Spread the dry materials evenly over the bottom of the trench to make a layer 15-25 cm thick, as deep as a hand. Then sprinkle water with a watering can or scatter water evenly by hand over the dry plant materials so they are moist, but not wet. 3) The foundation layer provides ventilation for air to circulate, and excess water to drain out of the upper layers. The three basic layers 1) The compost heap is built up of layers of materials, like in a big sandwich. The basic sequence is: Layer 1: A layer of dry plant materials, or mixture of dry plant materials with compost making aids (spices) like good soil, manure and/or some ashes. The layer should be 20-25 cm thick, i.e. as deep as a hand. The compost making aids (spices) can be mixed with the water to make slurry. Water or slurry should be scattered by hand or sprinkled with a watering can evenly over this layer making it moist but not soaking wet. Layer 2: A layer of moist (green) plant materials, either fresh or wilted, e.g. weeds or grass cuttings, plants from clearing a pathway, stems and leaves left over from harvesting vegetables, damaged fruits and vegetables. Leafy branches from woody plants can also be used as long as the materials are chopped up. The layer should be 20-25 cm thick. Water should NOT be sprinkled or scattered over this layer. Layer 3: A layer of animal manure collected from fresh or dried cow dung, horse, mule or donkey manure, sheep, goat or chicken droppings. The animal manure can be mixed with soil, old compost and some ashes to make a layer 5-10 cm thick. If there is only a small quantity of animal manure, it is best to mix it with water to make slurry, and then spread it over as a thin layer 1-2 cm thick. 2) Layers are added to the heap in the sequence, Layer 1, Layer 2, Layer 3, until the heap is about 1-1.5 metres tall. The layers should be thicker in the middle than at the sides so the heap becomes dome-shaped, which helps rainwater entering the pit. 3) Layers 1 and 2 are essential to make good compost, but Layer 3 can be left out if there is a shortage or absence of animal manure. 4) Place one or more ventilation and/or testing sticks vertically in the compost heap remembering to have the stick long enough to stick out of the top of the heap. Ventilation and testing sticks are used to check if the decomposition process is going well, or not. A hollow stick of bamboo grass (Arundo donax) or bamboo makes a good ventilation stick as it allows carbon dioxide to diffuse out of and oxygen to diffuse into the heap. A testing stick is needed as it can be taken out at regular intervals to check on the progress of decomposition in the heap. 4.1.4 Making the covering layer The finished heap needs to be protected from drying out, and also from animals pushing into it and disturbing it. 1) The covering layer can be made of wet mud mixed with grass or straw, with or without cow dung, or wide leaves of pumpkin, banana, fig trees, etc, or from plastic, or any combination of these materials, i.e. mud plaster covered with leaves or plastic, or leaves covered with plastic. 2) The cover should be put on both the sides and the top of the heap with only the ventilation stick coming out of the top. 
3) The covering layer: Prevents rain water from getting into the heap and damaging the compost making process; and Helps keep heat inside the compost making heap. See the section on follow-up for how to check on the heat and moisture in the compost. 4) The compost heap can also be protected by putting a ring of stones or making a small fence around it. 5) The compost heap is best left untouched until there is mature compost inside it, or it can be turned over, as described for the pit method. If the compost is turned over, water should be sprinkled over the layers to keep all the materials moist. It is not necessary to try and keep the original different layers when turning over the compost—it is best if all the materials can be well mixed together, then added in layers about 20-25 cm thick and water sprinkled or splashed over them. 6) A mature compost heap is about half the height of the original heap, and the inside is full of dark brown or black substance, humus, which smells good. When the compost is mature, it should be very difficult to see the original materials. 7) This mature compost can be used immediately in the field, or it can be covered and stored until the growing season. When it is put in the field, it should be covered quickly by soil so the sun and wind do not damage it, and the nitrogen does not escape to the atmosphere. Therefore, it is best to put compost on the field just before ploughing, or at the same time as sowing the crop. For row planted crops, it can be put in the furrow with the seed. For transplanted crops, it can be put in the hole with the seedling. 4.2 Indore Pit Method The Indore pit method is best done at the end of the rainy season or during the dry season. It is important to make the pits where there is sufficient water available; for example, by a pond, small dam, run-off from a road or track, etc. Women and girls should not be expected to carry water just for making compost. Waste water and urine from people and animals can be collected in old containers, and used in making compost. The main reasons for making pit compost in the dry season are as follows: 1) After harvesting is complete, farmers can arrange their time to make compost including working together in groups according to their local traditions to share their labour. 2) If farmers have a biogas digester, the bioslurry from the digester can be used to make high quality compost at any time of the year, but particularly during the dry season. 3) The pits can be filled 2 or more times so that a large quantity of compost can be made over the duration of the dry season. 4) If pit compost is made during the rainy season or in very wet areas, water can get into the bottom of the pit. This will rot the materials producing a bad smell and poor quality compost. In wet areas it is better to make compost through the piling method. 5) Poor quality compost will not be productive and this can discourage farmers and others from trying to make better quality compost. 6) It is very important to have a frequent follow up and control of the balance of air and water in the materials being decomposed to make compost. 4.2.1 Selecting and preparing the site 1) The site should be accessible for receiving the composting materials, including water and urine, and for frequent watching/monitoring and follow up. 2) The site should be protected from strong sunlight and wind. It should be in a protected area, for example, in the shade of a tree, or on the west or north side of a building or wall. 
3) The pit or pits should be marked or have a ring of stones or small fence around it or them so that people and animals do not fall into it or them. 4) The site should NOT be where floods can come. 4.2.2 Digging the pits The aim is to have a series of 3 pits, one next to the other. The minimum size for each pit should be: 1.0 metre deep (pits should NOT be deeper than 1 metre) 1.0-1.5 metres wide 1.0-1.5 metres long (or longer) The pits can be dug as they are needed – see Table 2 showing the flow of work. If a farmer and his/her family feel they have limited capacity, they can dig 1 pit of the above size, but then they should probably make compost using the Bangalore method (see next section). Smaller pits usually dry out too quickly so good quality compost is not be made, and this will discourage the farmer from making and using compost. Pits deeper than 1 metre can be cold at the bottom and the micro-organisms cannot get enough oxygen to work properly. If compost is prepared by a group of farmers or students in an environment club or youth group, they can make a wider and/or longer pit that can supply all the families in the group. It also depends on the amount of composting materials they are able to collect and bring to the pit. See also the sections on Trench Composting and the NADEP method, which are more suitable for compost making by groups, and where large quantities of composting materials are readily available. After the pit or pits are dug, they should be checked carefully to make sure there is no leakage of water into the pit which could spoil the compost making process. 4.2.3 Layers for filling the pit Before the pit is filled, the bottom and sides should be covered with a mixture of animal dung and water – slurry. If animal dung is not available, a mixture of top soil and water can be used. This plaster helps seal the sides of the pit so that moisture stays in the compost making materials. The foundation layer 1) Dry plant materials, e.g. strong straw and stalks of maize and sorghum, which are thick and long, are used for the foundation. These need to be broken into short lengths (about 10-15 cm long). The stalks can be crushed, and then chopped. If possible let cattle lie down or sleep on them for one or two nights. Walking cattle over the stems and stalks, as in threshing, is a good way of breaking up the stalks. The cattle will add their dung and urine to the stalks making them more valuable for making compost. 2) Spread the dry materials evenly over the bottom of the pit to make a layer 20-25 cm thick. Then sprinkle water with a watering can or scatter water evenly by hand over the dry plant materials so they are moist, but not wet. 3) This is a very important layer in making pit compost as it makes sure that air can circulate through to the bottom of the pit. The three basic layers 1) The compost pit is filled with layers of materials, like in a big sandwich. The basic sequence is: Layer 1: A layer of dry plant materials, or mixture of dry plant materials with compost making aids (spices) like good soil, manure and/or some ashes. The layer should be 20-25 cm thick, i.e. about the depth of a hand at the sides. The compost making aids can be mixed with the water to make slurry. Water or slurry should be scattered by hand or sprinkled with a watering can evenly over this layer. The layer should be moist but not soaked. Layer 2: A layer of moist (green) plant materials, either fresh or wilted, e.g. 
weeds or grass, plants from clearing a pathway, stems and leaves left over from harvesting vegetables, damaged fruits and vegetables. Leafy branches from woody plants can also be used as long as the materials are chopped up. The layer should be 20-25 cm thick at the sides. Water should NOT be sprinkled or scattered over this layer. Layer 3: A layer of animal manure collected from fresh or dried cow dung, horse, mule or donkey manure, sheep, goat or chicken droppings. The animal manure can be mixed with soil, old compost and some ashes to make a layer 5-10 cm thick. If there is only a small quantity of animal manure, it is best to make slurry by mixing the dung in water, and then spread it over as a thin layer 1-2 cm thick. 2) Layers are added to the pit in the sequence, Layer 1, Layer 2, Layer 3, until the pit is full to the top with the middle about 30-50 cm higher than the sides. The layers should be thicker in the middle than at the sides so the top becomes dome-shaped. Layers 1 and 2 are essential to make mature compost, but Layer 3 can be left out if there is a shortage or absence of animal manure. 3) Place one or more ventilation and/or testing sticks vertically in the compost pit remembering to have the stick long enough to stick out of the top of the pit. Ventilation and testing sticks are used to check if the decomposition process is going well, or not. A hollow stick of bamboo grass (Arundo donax) or bamboo makes a good ventilation stick as it allows oxygen to diffuse into the pit. A solid stick is important as it can be taken out every few days to check on the progress of decomposition of the materials in the pit. 4.2.4 Covering the pit After the pit is full of compost making materials, the top should be covered with wet mud mixed with grass and/or cow dung, and/or wide leaves such as those of banana, pumpkin or even from fig trees, and/or plastic so the moisture stays inside the pit, and rain does not get in to damage the decomposition process. NOTE: Mark the place and/or cover the top with branches so animals and people do not tread on the cover and break it. The progress in making compost should be checked regularly by taking out the ventilation or testing stick and checking it for heat, smell and moisture. The inside of the pit should be hot and moist with a good smell. The top of the pit will also sink down as the composting materials get decomposed. 4.2.5 Turning over and making compost throughout the dry season 1) In warm climates, about one month after the pit has been filled the compost can be turned over and checked. 2) In cold climates, the compost making materials take two or more months to start to decompose well. The rate of decomposition can be checked through the use of the testing stick. 3) A good farmer or gardener will soon learn how to judge the best time to turn over her or his compost. Table 2 and show the sequence of activities for digging, filling and turning over compost in the 3-pit system. This system spreads out the work so that a farmer who wants to have a good quantity of quality compost can plan and prepare it before the growing season. 
Table 2: Sequence of activities for digging, filling and turning over compost in pits
Month | Pit A | Pit B | Pit C | Storing or using mature compost
1st month | Dig pit A; fill pit A with compost materials | - | - | -
2nd month | Fill pit A for a second time | Dig pit B; put compost materials from pit A into pit B | - | -
3rd month | Fill pit A for a third time | Put compost materials from pit A into pit B | Dig pit C; put compost materials from pit B into pit C | -
4th month | - | Put compost materials from pit A into pit B | Put compost materials from pit B into pit C | Use mature compost from pit C or store it in pit C
5th month | - | - | Put compost materials from pit B into pit C | Use mature compost from pit C or store it
6th month | - | - | - | Use mature compost from pit C or store it
The sequence for making a good quantity of quality compost using the 3-pit method is as follows:
The cover is removed and all the materials are turned over into the second pit, i.e. from pit A to pit B. It is important to put the materials from the top of pit A into the bottom of pit B, and so on, with the materials from the bottom of pit A getting to the top of pit B. The materials can be mixed together, but they should be added in layers 20-25 cm thick and sprinkled with water to make sure they stay moist, but NOT soaked.
At the same time check that the moisture and air balance is correct. If the materials are too dry, more green materials should be added and/or water should be sprinkled over them as they are put into the pit. If the materials are too wet, add more dry plant material in layers between the wet decomposing materials.
If the compost making is going well, you will find that the materials from pit A do not completely fill pit B. You will also see the white threads of fungi and many kinds of small organisms, including termites, that are living on and decomposing the composting materials. The composting materials will have started to turn dark brown or black.
Pit A can now be filled for a second time with a new lot of composting materials as described above. Both pits should be closed with a layer of mud or leaves and/or plastic, as described above.
Again after about another month, the cover over pit B can be opened and the materials turned into pit C, and the cover to pit A removed and the materials in pit A turned over into pit B. At the same time check that the moisture and air balance in the materials is good. If the compost-making process is going well, after two months the materials in pit B should be well decomposed, i.e. dark brown or black, with a good smell, and these can be turned into pit C. Pit A can now be filled for a third time with new composting materials, if they are available.
After a third or fourth month in warm climates, it should be possible to find fully matured compost in pit C. The material should look like good dark soil without any of the original materials visible. However, pit C may be only half full after the first lot from pit B is put into it. In fact, pit C can store all the compost until it is needed. Pit C should always be covered to prevent rain getting in, nutrients getting out and the compost being spoiled. Or, the mature compost can be taken out, piled up and covered to be stored in a dry, cool and shady place until it is needed. It must be covered so that it does not blow away or the nutrients get destroyed by sunlight or rain.
The mature compost can be taken out and put on the field just before ploughing, or mixed into the soil immediately by hand.
The compost must be covered with soil so that the nutrients, particularly nitrates, are not destroyed by the sunlight. With enough moisture and heat, compost making is fast under Ethiopian conditions. Four months after filling the first pit, it is possible to have compost to use on the land. By the sixth month, a good farmer can accumulate 3 lots of compost, enough for half a hectare of land. 5. BANGALORE COMPOST PREPARATION METHODS The Bangalore method is not as precise or as demanding of hard work as the Indore method because the composting materials are added as they become available. It is highly suitable where there is a shortage of both composting materials and water. It is also the best method for making compost from household waste and/or vegetable gardens. The Bangalore method can be used for both piling and pit methods, but the pit method is preferred in Ethiopia. This is because the pit holds moisture better than the heap, and the wind cannot blow away the materials so easily in the dry season. However, inside house compounds, the piling method is also convenient. 5.1 Bangalore Piling Method 5.1.1 Selecting and preparing the site 1) Select a site where it is easy to add materials, e.g. inside a house compound. 2) The site should be sheltered from rain and wind. The best is in the shade of a tree, or on the north or west side of a building or wall to be sheltered from sun for most of the day. 3) Clear the site of stones and weeds, but leave trees to grow and give shade. 4) Mark out the length and width of the heap; for example, 1-2 m x 1-1.5 m and dig a trench 20-25 cm deep, i.e. about the depth of a hand, to be at the bottom of the heap to hold the foundation layer and stop it drying out in the dry season. 5.1.2 Making the heap The foundation layer 1) Prepare the foundation layer from dry plant materials such as old straw, stalks of maize and sorghum, or old cabbage stalks, rose and hedge trimmings, etc from gardens. 2) Use straw and maize and sorghum stalks as livestock bedding for one or two nights so that they get broken up and mixed with urine and dung. 3) Collect the materials and put them into the trench to make an even layer 15-25 cm deep. Sprinkle or scatter some water over the layer so it is moist but not wet. 4) Cover the layer with a little soil and some large leaves from banana, or pumpkin, or a fig tree, or even a sheet of plastic to prevent the materials drying out or being blown away. Making the other layers 1) During the week, collect materials and put them in a convenient container such as an old jerry can, or next to the compost heap. Dry plant materials can be mixed with fresh moist ones, or the two types of plant material can be kept separately. The farmers in Ethiopia prefer to mix the dry and moist plant materials together. These materials can come from spoiled animal feed where animals have been stall fed, from cleaning the house and compound, clearing paths, weeding, stems and leaves after harvesting vegetables, preparing vegetables for making food, damaged fruits and vegetables, etc. 2) The dry materials can be used as livestock bedding for one or two nights so they collect urine and dung, and the animals can walk over them to break them up. 3) At the end of a week, remove the large leaves or plastic covering the top of the foundation layer so they can be used again, or leave the leaves to become part of the compost if they are too damaged to be used again. 
4) Make a mixture of compost making aids (spices) like good soil, old manure and/or some ashes as a fine powder. Mix these with the dry plant material, or with the mixture of dry and moist plant material. 5) First add the layer of dry plant materials that have been used as bedding with the animal urine and dung in them, and then put the layer of green plant materials on top, OR add a layer of the mixed dry and moist plant materials. Make each layer 15-25 cm thick with the middle thicker than at the sides so that the heap becomes dome-shaped 6) Cover each layer with a thin layer of animal manure or soil and/or big leaves like those from banana or pumpkin or fig trees so that the composting materials do not dry out. Animal manure can be left out if it is not easy to get, but the soil is important. 7) Repeat this process each week, or whenever there are enough materials collected to make one or two new layers, until the heap is about 1-1.5 metres tall. Make the centre of the heap higher than the sides so that the heap has a dome shape. 8) Put a testing and/or ventilation stick into the middle of the heap. 5.1.3 Making the covering layer The finished heap needs to be protected from drying out, and also from animals pushing into it and disturbing it. 1) The covering layer can be made of wet mud mixed with grass or straw, with or without cow dung, or wide leaves of pumpkin, banana, fig trees, etc, or from plastic, or any combination of these materials, i.e. mud plaster covered with leaves or plastic, or leaves covered with plastic. 2) The cover should be put on both the sides and the top of the heap with only the ventilation stick coming out of the top. The covering layer: Prevents rain water from getting into the heap and damaging the compost making process; and Helps keep heat inside the compost making heap. See the unit on follow-up for how to check on the heat and moisture in the compost. 3) The compost heap can also be protected by making a small fence around it from branches. 4) The compost heap is best left untouched until there is mature compost inside it, or it can be turned over, as described for the pit method. If the compost is turned over, water should be sprinkled over the layers to keep all the materials moist. It is not necessary to make the different layers when turning over the compost – all the materials can be well mixed together, then added in layers about 20-25 cm thick and water sprinkled or splashed over them. Where the climate is warm, mature compost can be ready in about 4 months. 5.2 Bangalore Pit Method 5.2.1 Selecting and preparing the site 1) It should be in a place that is easy to take the materials, including water and urine, to the pit as well as for watching and follow up. 2) The site should be protected from strong sunlight and wind. It can thus be, for example, in the shade of a tree, or on the west or north side of a building or wall. 3) The pit should be marked or have a ring of stones or a fence of branches around it so that people and animals do not fall into it. 4) The site should protected and away from where floods can come. 5.2.2 Digging the pit 1) The minimum size of a pit should be: 1 metre deep (pits should NOT be deeper than 1 metre) 1-2 metres wide 1-2 metres long 2) If a farmer and his/her family, or urban household, can collect more compost making materials, the pit can be made longer, but NOT either wider or deeper. 
3) If a pit is deeper than 1 metre, the material at the bottom does not get decomposed because many of the micro-organisms cannot live so deep down as the oxygen they need will not reach them. 4) Before any materials are put into the pit, the sides and bottom should be checked to make sure no water is leaking into the pit. 5) The bottom and sides should be plastered with a mixture of fresh animal dung and water, or top soil and water, to seal the surface so that the moisture in the compost materials is kept in the pit. 5.2.3 Filling the pit The foundation layer 1) Dry plant materials, e.g. strong straw, stalks of maize and sorghum or tall grasses, as well as rose and hedge clippings from gardens, are used for the foundation. These need to be crushed or chopped or broken into short lengths (about 10-15 cm). If possible, let the domestic animals walk over them and sleep on them for one or two nights so the materials get broken up and mixed with urine and dung. 2) Spread the materials evenly over the bottom of the pit to make a layer 15-25 cm thick. Then sprinkle/scatter water evenly so that the materials are moist, but not wet. 3) This is a very important layer in making compost in a pit as it makes sure that air can circulate to the bottom. 4) Cover the foundation layer with large leaves, e.g. those of pumpkin, banana, fig leaves etc, and/or plastic to keep the material moist. Putting the other layers into the pit 1) Each week, collect materials and put them in a container such as an old jerry can or pile them next to the compost heap. Mix the fresh moist materials with dry ones. These materials can come from spoiled animal feed, old animal bedding, from cleaning the house and compound, preparing vegetables for food, clearing paths, weeding, stems and leaves after harvesting vegetables, damaged fruits and vegetables, etc. 2) If the farmer has a biogas digester, the bioslurry can be collected also to be mixed with the other materials. The bioslurry is an excellent compost making aid. 3) At the end of a week, remove the large leaves or plastic covering the top of the foundation layer so they can be used again, or leave the leaves to become part of the compost if they are too damaged to be used again. 4) Make a mixture of compost making aids (spices) like good soil, manure and/or some ashes as a fine powder. Mix these with the dry plant material, or with the mixture of dry and moist plant material. 5) Add the prepared composting materials in layers. Each layer is 15-25 cm thick at the edge and a bit thicker in the middle so that the heap becomes dome-shaped 6) Cover each of the layers with a thin layer of soil and/or big leaves like those from banana or pumpkin or fig trees so that the composting materials do not dry out. 7) Repeat this process each week, or whenever there are enough materials collected to make one or two new layers, until the pit is full. Make the centre of the layers in the pit higher than the sides so that the top has a dome shape. 8) Put a ventilation and/or testing stick into the middle of the pit. 5.2.4 Making the covering layer The pit full of composting materials needs to be protected from drying out, and also from animals disturbing it. 1) The covering layer should be made of mud plaster, with or without cow dung, with only the ventilation stick coming out of the top. It is then covered with wide leaves of pumpkin, banana, fig trees, etc. or plastic can also be used to protect the top of the pit. The leaves or plastic: Prevent rainwater from getting inside the pit. 
Help keep heat inside the pit. 2) The compost pit can be left untouched until there is mature compost inside it, or it can be turned over and checked for the progress in decomposition. The top of the pit will sink down as the compost materials get decomposed. However, if the compost is turned over, it will lose moisture. So, it is best only to turn compost over if there is enough water and/or urine to make it moist again while it is being turned over. 3) The process for turning over the compost from the pit is the same as that described for Indore pit method. 4) In a warm climate, mature compost can be made in 3-4 months. In colder climates, decomposition can take six months or a year. 5) The mature compost can be left covered and stored in the pit until it is needed for adding to the soil. 6. TRENCH COMPOSTING Trench composting is suitable for groups. These can be groups of farming households, environmental clubs in schools, or youth group members who agree to work together to collect the materials, make the compost, and then share it among the members, or use it in their common garden. Trench composting is good for mixed groups of men and women because men can do the heavy work of digging the trench and turning the compost materials, while the women can contribute materials and help carry the mature compost to where it is needed, including their own fields and gardens. 1) Plan to make compost in a trench at the end of the rainy season when there is plenty of suitable compost making materials available from clearing paths and compounds, etc, so that the mature compost is ready for the next growing season, or for making nursery beds for raising tree and vegetable seedlings. 2) The trench should be made at a convenient place for the members of the group to bring the collected materials; for example, near a path used by the members. It should also be under the shade of a tree to protect the people working to make the compost from getting too hot in the sun. In some communities, the people making and turning the compost do it in the evening or even at night to prevent getting overheated. The strong smell that can come from decomposing materials is also reduced in the cool of the evening or night. 3) A good size for the trench is as follows: 0.5-1.0 metre deep, but not deeper than 1 metre 1.0-1.5 metres wide 2.5 metres or longer if there are plenty of materials, even up to 10 metres long. 6.1 How to prepare and fill the trench 1) Mark out the size of the trench. Note: the length of the trench can be increased as more materials become available. 2) Dig down to 0.5–1 m and put the soil in a pile to one side of the trench. The soil is added in layers between the composting materials and/or used to cover the top of the filled trench. 3) The group members collect and bring materials from their houses, home compounds, cleaning paths, weeding, after harvesting vegetables, etc., if possible after having animals lie on the materials for one or more nights 4) Look for and collect dry plant materials, such as long grasses and matting, sorghum and maize stalks to make a foundation layer. Get them broken up by animals walking and lying down on them. Put these materials as a bottom layer in the trench. Sprinkle/scatter water over the dry materials until they are moist, but not wet. 5) Mix all the collected materials together. 
Some or all of the following are suitable: cleanings from the house and from cooking, crop residues—leaves and stalks from harvesting and clearing/cleaning vegetable fields, chicken and goat and sheep droppings, cow dung, add some old compost as a starter (like yeast). 6) Put the mixed materials in the trench in layers, each 20-25 cm thick at the sides and thicker in the middle. 7) Sprinkle/scatter water, or urine mixed with water over the materials, until they are moist but not wet. Any type of wastewater, even after washing clothes with hard washing soap, (but NOT with powder or liquid detergents, such as omo) can be used for wetting. 8) Cover this layer with a thin layer of the soil taken from digging the trench. 9) Repeat this process of making layers until the trench is full and the middle is 25-50 cm higher than the surrounding ground. 10) Mix the soil that was dug out from the trench with straw, grasses, cow dung and water, in the same way as making a mud plaster to cover the walls of a house. Use this mixture to make a complete cover and seal over the top of the compost materials. Regularly check the mud plaster cover and repair cracks or other types of damage. 11) Put ventilation/testing sticks in the compost materials at about 1 metre intervals. 12) Finally, cover the trench with thatching grass or wide leaves of banana or pumpkin or fig trees, and/or plastic to keep in the moisture and heat. 13) Regular use the testing sticks to monitor the progress of compost making. 14) The covered trench can be left untouched for 3-4 months, or longer, by which time mature compost will have been made. Evidence of compost making is seen first in the heat, and then in the fact that the heap shrinks down, and weeds start to grow on the mud cover. 6.2 How to turn over trench compost 1) After 2 months, the cover can be opened and the compost turned over. At the same time, the moisture balance and decomposition process can be checked. However, if the decomposition process is not complete, the compost will have a strong smell. It is best to do the turning over process during the early morning, or in the evening, or even at night to reduce the smell. 2) Turning over the compost is best done by digging out all the compost from about 50 cm at one end of the trench, and putting this outside the trench. Then the remaining compost is turned over in units of 50 cm into the trench so the materials at the top are put at the bottom and those at the bottom are put on top. The materials taken out from the first 50 cm strip are put back at the end of the trench. This is the same method as that used in double digging a vegetable bed. 3) If the materials are not well decomposed and too dry, water can be sprinkled over the materials as they are turned over. 4) If the materials are too wet and smelling of ammonia, more dry materials can be added in the turning over process. 5) After turning over, the materials need to be covered and sealed as described above. 7. THE NADEP METHOD The NADEP method is a development of the Indore method. It is named after its inventor, Narayanrao Pandaripade who was also called ‘Nadepkaka’. This system is suitable for organized groups, such as growers associations, cooperatives, school environment clubs and youth groups, to make large quantities of high quality compost which they can use for themselves, or sell, for example, to vegetable growers, where high levels of nutrients are required. The NADEP method produces nitrogen-rich compost using the least possible amount of cow dung. 
The system also minimizes problems from pests and diseases, and does not pollute the surrounding area because the compost is made in a closed tank. After the NADEP tank has been filled with compost making materials and sealed, it is left for the decomposition process to take place without any further handling until the mature compost is required. 7.1 Selecting and preparing the site for the NADEP tank The NADEP method uses a permanently built tank of mud or clay bricks, or cement blockettes. It is, therefore, important to choose the permanent site for the tank with care. 1) Select a site where there is enough space to collect the materials together before filling the tank, and where mature compost can be stored until it is needed. 2) The site needs to be near a source of water. 3) The site should be sheltered from rain, floods and wind. The best is in the shade of a tree, or on the north or west side of a building or wall. However, air must be able to circulate all round the tank. 7.2 Building the NADEP tank 1) The inside dimensions of the tank are as follows: Length 3 metres Width 2 metres Height 1 metre 2) This size of tank requires 120-150 blockettes or mud bricks, four 50-kg bags of cement, and 2 boxes of sand. Five iron rods can be used to strengthen the floor, but they are not essential. 3) The building should be done by a properly qualified mason, i.e. someone who knows how to build such a structure. 4) The floor of the tank is made of bricks or blockettes laid on the ground and covered with a layer of cement. 5) Each of the 4 walls has 3 rows of holes or gaps between the bricks or blockettes, as shown in 6) After the tank is built, the walls and floor are covered with a light plaster of fresh cow dung mixed with water, and then the tank is left to dry out. 7.3 Filling the tank A NADEP tank is filled in one or two days of hard work. It has to be done by a team. Before filling the tank, the following materials must be collected together: 1) Dry and green plant materials—1400-1500 kg (or 14-15 sacks) are needed. Grass, hay or straw that is left over from feeding animals, or that has been damaged by rain, is very suitable. 2) Cow dung or partly dried bioslurry (the discharge from a biogas digester)—90-100 kg or 10 sacks. 3) Dried soil that has been collected from cattle pens, cleaning drains, paths, etc—1750 kg are needed. The soil should be sieved to remove old tins, plastic, glass, stones, etc. Soil that contains cattle urine makes it very productive in the compost making process. 4) Water – the amount varies with the season and the proportion of dry to green plant materials available. However, usually an equivalent amount to plant materials is needed, i.e. 140-150 litres. 5) If urine from cattle and/or people is available, it should be diluted in the proportion of 1 part urine for 10 parts water (1 jug of urine put into 10 jugs of water in a bucket). 6) Before starting to fill the tank, the sides and floor of the tank are thoroughly wetted with slurry made from fresh cow dung mixed into water. 7) The three layers used to fill the tank are as follows: First layer: use 100-150 kg of dry or mixed dry and green plant materials to make a layer 15-25 cm thick at the sides, and slightly thicker in the middle. Second layer: Mix 4 kg of cow dung or 10 kg of fresh biogas slurry in 25-50 litres of water and sprinkle or scatter it over the plant materials so they get completely moistened. 
Third layer: Cover the wet plant waste and cow dung or slurry layer with a layer of 50-60 kg of clean, sieved top soil. 8) Continue to fill the tank like a sandwich with these 3 layers put in sequence. Put more materials in the middle of the tank than around the sides. This will give a dome shape to the filled tank with the centre 30-50 cm higher than the sides 9) Cover the last layer of plant materials with a layer of soil 7-8 cm thick. Make a cow dung plaster and cover the soil so that there are no cracks showing. The top of the filled tank can also be covered with plastic, particularly to protect the compost making process during rainy seasons. 10) After the tank is filled, the progress of compost making can be tested by pushing a stick into the tank through the gaps in the wall. In a school or agricultural college, the students can monitor the changes in temperature by inserting a long thermometer, e.g. a soil thermometer. 11) As the materials decompose in the compost making process, the top of the filled tank will shrink down below the sides of the tank. 7.4 Following up on the NADEP compost making process It is important to keep the contents of the tank moist, i.e. with a moisture content of 15-20%. 1) Check the mud plaster seal on the top of the tank and fill any cracks that appear with cow dung plaster. 2) Pull out any weeds if they start to grow on the surface, as their root systems can damage the cover and take water out of the compost. 3) If the atmosphere gets very dry and hot, such as in the dry season, water can be sprayed through the gaps in the walls of the tank. The decomposition process for compost to be made takes about 3-4 months in a warm climate. When it is mature, it is dark brown, moist, and with a pleasant earthy smell: little can be seen of the original materials that were put into the tank. This mature compost should not be allowed to dry out or it will lose a lot of its nitrogen. However, before the compost is mixed to make nursery soil, it should be sieved. The sieved compost is used in making the soil for the nursery beds, and the remainder is kept and added to a new compost-making process. One NADEP tank of the size described here can produce about 30 tonnes or 300 quintals / sacks of high quality compost. 8. FOLLOWING-UP ON CONDITIONS IN THE COMPOST MAKING PROCESS When the compost pit has been filled or the piling of materials is complete, it should be checked regularly to make sure that there is enough but not too much moisture, and that it is getting hot, at least in the first 2-3 weeks. For compost made by piling materials on the ground: The stick can be inserted or pushed in horizontally between two layers about half way up the pile, or The stick can be pushed in vertically in the centre of the heap so it goes through all the layers. However, it is best if the stick or length of bamboo is place in the centre after the foundation layer has been laid and then the layering process is completed with the stick remaining vertical. The stick must be longer than the height of the heap so that it can be pulled out and examined. For compost made in a pit: The stick or length of bamboo is pushed in vertically through the whole layer, or put in place while the compost pit is being filled. The stick must be longer than the depth of the pit. 8.1 Checking heat and moisture One week after all the materials have been put in a heap or a pit, and it has been covered, remove the inserted stick and immediately place it on the back of your hand. 
1) If the stick feels warm or hot and the smell is good, good decomposition has started.
2) If the stick feels cool or cold and there is little smell, the temperature is too low for good decomposition. This usually means that the materials are too dry, and some water and/or urine should be added.
3) If the stick is warm and wet, and there is a bad smell like ammonia, this indicates that there is too little air and too much water in the compost. The materials will be rotting and not making good compost.
8.2 Correcting the problems
8.2.1 If the materials are cool and dry
1) Lift up the top layers and put them to the side of the pit or heap.
2) Sprinkle water, cattle urine, or cattle urine diluted with water on the material in the bottom.
3) Then put back the material in layers of about 25 cm each, sprinkling water or a mixture of water and urine over each layer.
4) Replace the testing stick and cover the heap or top of the pit with soil, leaves, plastic, etc., as described earlier.
8.2.2 If the materials are too wet
1) Collect some more dry plant materials and/or some old dry compost. Break up and mix the materials. If old dry compost is not available, use only dry plant materials.
2) Lift off the top of the heap or take out the top half of the materials from the pit and put them on one side.
3) Mix the new dry materials with the wet compost materials in the bottom.
4) Put back the materials from the side of the heap or pit. If these materials are wet and decaying, put in alternate layers of new dry plant materials with the wet materials.
5) If the top materials are moist and brown, showing compost making has started, put them back as they are.
6) Put back the vertical testing stick.
7) Do NOT seal the top, but make a new test after a week. If the stick is warm or hot and the smell is good, good compost making has started and the heap or top of the pit can be sealed and covered.
Testing for heat and moisture should be done every week to 10 days until mature compost is made.
9. QUALITIES AND USE OF GOOD COMPOST
Although the quality of compost is best evaluated through the growth and productivity of the plants grown on soil treated with it, it is possible to evaluate compost quality through seeing, touching and smelling.
1) Good quality compost is rich in plant nutrients and has a crumb-like structure, like broken up bread.
2) It is black or dark brown and easily holds moisture, i.e. water stays in it, and it does not dry out fast.
3) It has a good smell, like clean newly-ploughed soil, with a smell somewhat like that of lime or lemon.
9.1 Using compost
Mature compost is best stored in a pit or heap until it is needed. If it is kept dry and covered, mature compost can be stored for several weeks without deteriorating. The stored mature compost should be kept in a sheltered place, e.g. under the shade of a tree or in a shed, and covered with leaves and/or soil and sticks to prevent the nutrients escaping to the atmosphere, and animals trampling on and damaging the mature compost heap.
Mature compost should be taken to the field early in the morning or late in the afternoon. For crops sown by broadcasting, the compost should be spread equally over the field, or the part of the field chosen to be treated with compost. The compost should be ploughed in immediately to mix it with the soil and prevent loss of nutrients from exposure to the sun and wind. For row planted crops, e.g. maize, sorghum and vegetables, the compost can be put along the row with the seeds or seedlings.
For trees, compost is put in the bottom of the planting hole and covered by some soil when the seedling is planted out. It can also be dug into the soil around the bottom of a tree seedling after it has been planted.
Time and effort are needed to make good compost, so it is worthwhile to also put time and effort into using it properly in the field.
9.2 Problems in using compost
Improper use: The aim of preparing compost is to increase soil fertility and crop yields. Sometimes, a farmer will try and spread a small amount of compost over a wide area, and then be disappointed when he/she does not see any improvements to his/her soil and crops. If only a small amount of compost has been made, it is best to put it on a small area of land rather than spread it thinly over a wide area. Every farmer must aim to produce enough compost for her/his particular farmland to get a better yield (return). A guide on how much to add is given in the next section.
Compost should not be left exposed to sun and wind on the surface of the soil, but buried immediately. Compost should not be added to empty fields. This is a waste of time and effort. By the time the crop gets sown, the compost will have lost a lot of its nutrients, leaving the farmer disappointed with his/her effort to make and use compost.
Carrying compost: Compost is bulky. For best results a farmer needs to carry up to 30 to 70 sacks (3-7 tonnes) of compost to cover a 1-hectare field. This is between 7.5 and 17.5 sacks for 1/4 ha.
Box 3: Farmers solving the problem of carrying compost
The farmers of Adi Abo Mossa near Lake Hashengie in Ofla of Tigray and in Gimbichu of Oromiya Regions have solved this problem by using their donkeys to carry the sacks containing mature compost from the compost pit to the field. Other farmers, in Adi Nefas, have organized the making of compost to be near their fields so they only need to carry the mature compost a short distance. If farmers are seriously convinced about the usefulness of compost, they find their own ways to solve these problems.
10. SOME ENCOURAGING IMPROVEMENTS TO COMPOST MAKING POSSIBILITIES
10.1 Use of Bioslurry
Bioslurry is the output or effluent produced from a biogas digester plant. Compost prepared from bioslurry can be made continuously throughout the year. The process spreads the labour required throughout the year as long as the farming family feeds the biogas digester regularly, preferably every day. An efficient farmer can make good quality mature compost from bioslurry in two months. The recommended design for a biogas digester plant in Ethiopia includes 2 compost pits to receive the bioslurry. These are filled alternately, as described below.
10.1.1 Constructing the bioslurry compost pits
Compost pits are an integral part of a biodigester plant. No plant is complete without them. Each biogas digester plant has a minimum of 2 compost pits. These should be constructed as follows:
1) The compost pits should be constructed near the outlet from the effluent tank so that the bioslurry can flow easily along narrow channels into the pits.
2) There should be a distance of 1 metre or more between the outlet from the effluent tank and the compost pits.
3) The channels should slope gently from the outlet from the digester down to the compost pits so that the bioslurry does not get held back in them.
4) The 2 pits are filled alternately, i.e. when one compost pit is full, the bioslurry should be directed to the other compost pit.
5) Each compost pit should be 1 m deep.
The width and length of the compost pits depend on the size of the biogas digester – see Table 3.
Table 3: The dimensions of compost pits for different biodigester plant capacities
Size of digester (m3) | Width (cm) | Length (cm) | Depth (cm) | Number of pits | Total minimum volume of pits (m3)
4 | 200 | 100 | 100 | 2 | 4
6 | 200 | 150 | 100 | 2 | 6
8 | 200 | 200 | 100 | 2 | 8
10 | 250 | 200 | 100 | 2 | 10
Source: Compact Course on Domestic Biogas
6) The distance between 2 pits should be about 50 cm, so that a person can walk between the two pits.
7) A rim of mud, about 10 cm high, has to be put around the top of the compost pits to stop rain water draining into the compost pits.
8) Compost pits must be shaded, either by putting them under a tree and/or making a cover, to avoid direct sunlight breaking down the nutrients so they escape as gases.
10.1.2 Filling the bioslurry compost pits
The farmer and his/her family collect compost making materials such as straw, ash, animal bedding and kitchen waste to add to the bioslurry. This is put by the side of the compost making pit and then added as layers between the inflows of bioslurry. The process is as follows:
1) Clean an empty compost pit and make sure the walls and floor are without cracks.
2) Put a small layer of composting material such as straw to cover the bottom of the pit.
3) Allow one day's outflow of effluent bioslurry to flow into the pit so that it completely covers the composting materials.
4) Add another layer of composting materials such as straw, leaves, weeds, grass or kitchen waste over the top of the bioslurry.
5) Repeat steps 3 and 4 until the pit is filled up; this usually takes about one month if the biogas digester has been fed every day.
6) Cover the top layer of bioslurry in the pit with covering materials such as large leaves and soil to make a seal.
7) Start the process in the next pit.
8) After another month, the second pit should be completely filled and the first pit will have mature or nearly mature compost in it that can be taken out and stored in a safe and covered storage place.
9) Using this system, a farmer can produce 12 pits of high quality compost in a year, more than enough for one hectare.
10.2 Vermi composting
Vermi compost is made using worms called Eisenia fetida. These are small red worms that eat organic materials mixed with some soil. The organic materials are ground up inside the worms' digestive system and mixed with bacteria that also help with decomposing the organic materials. The worms deposit the resulting material as worm casts. The compost produced by the worms is highly concentrated and full of nutrients. It is especially good for vegetable and ornamental flower garden beds. It can also be used in seedbeds instead of traditional potting soil. The process requires:
1) A container of any type and size, such as a plastic barrel or a small pit about 50 cm deep.
2) Bedding for the worms, such as cardboard soaked in water for an hour or more, torn into long strips roughly 2 inches wide and laid on the bottom of the container or the pit. The bedding should hold moisture so it helps keep the atmosphere moist for the worms. Vermi-compost worms cannot rest on plastic or bare soil.
3) Food preparation scraps such as potato peel, chopped up fruits, vegetables, coffee grounds and even tea bags. Allow the food scraps to begin to decompose for about a week before giving them to the worms, so that bacteria and fungi start the decomposition process and make the food more easily taken in by the worms.
Add new food scraps after the previous food is finished. 4) Red wriggling worms – gently lay them on top of the bedding and close the lid to the bin or put a cover over the pit so they can be in darkness. 5) Moisture – enough moisture is needed to wet the decomposable materials and then keep them soft and moist. 6) Materials NOT to be included in vermi-compost are meat, bones, dairy products and fatty or greasy foods. 11. AMOUNT OF COMPOST FOR ONE HECTARE Where soil fertility has been lost through many years of land degradation, 150-200 kg/ha of chemical fertilizer, such as Urea and DAP (diammonium phosphate) is recommended by the Ethiopian government. This can improve the yield in just one year, often dramatically, if there is enough rain or irrigation water. However, the effects of chemical fertilizer last for only one growing season, so it has to be added every year, and every year there has to be enough rain and/or irrigation water or the crop plants get burnt by the chemical fertilizer. One tonne of compost is not enough to get a similar increase in yield immediately in the year that it is added to the soil for the first time. This is because the amount of main plant nutrients (NPK) found in one tonne of compost is lower than that in 100 kg of chemical fertilizer. However, the effects of compost last for two or more growing seasons. In Europe where the soils are cold and there is much rain, the general guideline is that 20–25 tonnes of compost are needed to replace 100-150 kg of chemical fertilizer. The range is because the nutrient content of compost depends on the materials used to make the compost. Compost made only with plant materials usually has a lower nutrient content than compost made by including animal dung and urine. In Ethiopia, more research is needed to find out how much compost is needed to get good yields in the different agro-ecological zones of the country. However, in Tigray, it has been found that compost added at the rate of 3.5–10 tonnes per hectare can give greatly improved yields, which are as good if not better than those from chemical fertilizer (see Edwards et al. 2007). 11.1 A rough guide on amounts of compost that can be made and used by farmers in Ethiopia The following is a guide on the amount of compost to aim to produce under different environmental conditions: 1) Mature compost to give a rate of 8-10 tonnes per hectare can be achieved in areas where there are plenty of composting materials, a good water supply and labour. Farmers working in groups are more likely to be able to produce large quantities of good quality compost than farmers working alone. These quantities have been achieved in Adi Abo Mossa village in Southern Tigray and Gimbichu district in Oromiya Regions. Some of the farmers producing bioslurry compost are also able to produce large quantities of compost in order to apply 8-10 tonnes per hectare on their fields. 2) Mature compost to give a rate around 7 tonnes per hectare can be achieved where there are medium amounts of composting materials, and water and labour are available. These quantities have been achieved by farmers working in Central Tigray near the town of Axum. 3) Mature compost to give a rate of around 3.5 tonnes per hectare can bring improved yields. This can be achieved even in areas with low availability of composting materials, as long as there is enough water to moisten the composting materials. These rates have been achieved by farmers in the semi-arid eastern parts of Tigray. 
4) Where there are only small amounts of composting materials, e.g. for farmers who have very small plots of land and for women-headed households, working together to fill a common pit can make better quality compost than working alone.
5) If only a small quantity of compost is made, it is important to apply it properly to a small piece of land to make it as useful as possible, instead of spreading it thinly over a wider area.
6) Soil given compost in one year will not need it again in the next year as the good effects last for more than one growing season. The new compost can then be used for the part of the field that had no compost the previous year. Farmers that are able to apply the equivalent of 8-10 tonnes per hectare say that the good effects last for up to 3 years.
7) The study by Hailu (2010) indicated that the amount of compost applied per unit area varies with the type of soil and crop: generally, more compost is applied on sandy soil and for taller crop plants, while less compost is applied on clay soil and for smaller plants. For example, when a field is sown with teff, which is a small crop, an application of compost at a rate of 2.8 t/ha in clay soil and 4.8 t/ha in sandy soil can significantly improve the yield of both grain and straw. On the other hand, when a field is sown with barley, wheat or finger millet, higher amounts of compost are applied than for teff (Table 4). According to the farmers, application of these amounts of compost gives them better yields without the plants lodging, as can often happen when chemical fertilizer is applied at the recommended rates.
Table 4: Rough guide for farmers on compost application rates by crop and soil type, in t/ha
Crop type | Clay/Walka (fertile) | Reddish/Ba'ekhel (medium fertile) | Sandy/Hutsa (infertile)
Teff | 2.8 | 3.2 | 4.8
Barley/Wheat/Finger millet | 3.2 | 3.4 | 5.0
Maize/Sorghum | 3.4 | 4.0 | 6.0
Any legume crop | No application rate is identified
11.2 Compost production capacity
The type and amount of biomass available varies from season to season (Figure 13). This is because all types of composting materials are not available throughout the year. The results of Hailu's study clearly showed that most of the green composting materials are available between July and October, with the highest amounts in August and September. Dry materials, which can be stored, are available between October and March. During the dry season, water and green materials are in short supply, except in irrigated areas (Figure 13). Green materials are not easy to store for any long period.
The farmers' recommended season for compost making in Tahtai Maichew district near Axum is at the end of the rainy season, i.e. August to September. However, it is possible to prepare compost throughout the year if there is a good source of compostable materials as well as water. If a farmer has a biogas digester plant that is fed every day, the bioslurry effluent produced is over 90% water, so farmers do not need to get additional water to make compost. They only need to add other composting materials as described above.
Figure 13: Compost biomass availability by type of composting material and month
The crop and straw yields show that 6.4 t/ha of compost applied to the soil produced an equivalent yield to that from the application of 150 kg of mineral fertilizer in the clay soils of the Axum area (Hailu, 2010). The optimum number of animals for a farmer to produce 6.4 t/yr of compost is 3 cows and/or oxen.
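For development agents, teachers or students who keep records on a computer, the guide figures above can be turned into a quick calculation. The short sketch below is not part of the original extension material: it simply multiplies the Table 4 application rates (t/ha) by the area of a field and converts the result into sacks, assuming a sack of roughly 100 kg as implied by the figures in section 9.2 (30-70 sacks for 3-7 tonnes per hectare). The crop names, rates and sack weight are taken from this document; the function and variable names are only illustrative.

```python
# Rough compost planning aid (illustrative sketch, not part of the original manual).
# Rates are the Table 4 guide figures in tonnes per hectare;
# a "sack" is assumed to weigh about 100 kg (30-70 sacks = 3-7 tonnes per hectare).

RATES_T_PER_HA = {
    # crop: {soil type: t/ha}
    "teff": {"clay": 2.8, "reddish": 3.2, "sandy": 4.8},
    "barley/wheat/finger millet": {"clay": 3.2, "reddish": 3.4, "sandy": 5.0},
    "maize/sorghum": {"clay": 3.4, "reddish": 4.0, "sandy": 6.0},
}

SACK_KG = 100  # assumed sack weight in kg

def compost_needed(crop: str, soil: str, area_ha: float) -> tuple[float, float]:
    """Return (tonnes of compost, number of sacks) needed for a field."""
    rate = RATES_T_PER_HA[crop][soil]
    tonnes = rate * area_ha
    sacks = tonnes * 1000 / SACK_KG
    return tonnes, sacks

if __name__ == "__main__":
    tonnes, sacks = compost_needed("teff", "sandy", 0.25)  # a quarter-hectare teff field
    print(f"About {tonnes:.1f} t of compost, roughly {sacks:.0f} sacks of {SACK_KG} kg")
```

With these assumptions, a quarter-hectare teff field on sandy soil would need about 1.2 tonnes of mature compost, or roughly 12 sacks.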
In the Axum area, 68 percent of the families owned the recommended number of cattle, while 24 percent of them could get enough animal manure to produce 3.2 tonnes of compost. The remaining 7.8 percent of the households studied were without domestic animals. However, in addition to the number of animals held, improving biomass management makes a difference to biomass availability. Whenever farmers practiced good biomass management, such as collecting and storing weeds, animal bedding and leftovers from feeding animals, their capacity to produce more compost improved.
11.3 Planning to make good quality compost
The following factors need to be discussed and decisions made in order to prepare enough compost for a chosen piece of land.
Making and using compost correctly needs labour, so farmers and their families have to be prepared to work hard to make good compost, and get good results from using it correctly. Every farmer, agricultural agent, supervisor and expert working in the area should be convinced about the use and importance of compost. If everyone is convinced, then all will be willing to work hard to get good results.
Farmers and their families need to identify the materials in their fields, compounds and surroundings that can be used to make compost. The pit or pits for making compost should be near the source of materials, like the edge of a field for weeds and crop residues, or inside or near the family compound for waste from the house, home garden and animal pens.
Farmers living near small towns and villages may be able to arrange to collect waste materials from the houses, hotels, and other institutions where food is made in the town or village, or have the waste materials brought to an area convenient for a group to make compost together. The youth in a village or small town can be trained to make compost from the wastes in the town, and to use it to grow their own vegetables or sell it to the farmers.
Farmers should work with their development agents, supervisors, experts, and other persons to help them make decisions about how to make compost depending on the local availability of composting materials, the place where the compost is to be made and the fields where it is going to be used to improve the soil and crop yields.
Students engage in several online simulations and in-class investigations related to the density of liquids, solids, and gases. They apply new understanding about density to the design and construction of hot air balloons. They make informed predictions about the variables that may affect the launch of their homemade hot air balloons and test them. The finale is the “Got Gas?” rally where students display their balloons and use multimedia presentations to demonstrate the principles of density used in the construction of their hot air balloons. View how a variety of student-centered assessments are used in the Density: Got Gas? Unit Plan. These assessments help students and teachers set goals; monitor student progress; provide feedback; assess thinking, processes, performances, and products; and reflect on learning throughout the learning cycle. Present the Essential Question, How is science applied in the real world? Hold a general discussion on this question. Discuss properties of matter, such as color, shape, flexibility, strength, and as many other properties that students can brainstorm and why the properties might be important. Tell students that for the next few weeks, they will be investigating the property of density. Have them write everything they know about density and why density might be important. Have students investigate specific properties of matter with the layered liquids lab. In this lab, students layer mystery liquids and compare their relative densities. Give each team one set of equipment (see materials on the lab worksheet). The liquids are as follows: Each group should have 5 ml of each unknown liquid. Directions for the students are given in the "Procedure" section on the lab sheet. (Note that this procedure can also be done as a teacher-only demonstration.) Explain that students will move from comparing the density of liquids to investigating the density of solids. Have students navigate to Density*, an online simulation that encourages students to experiment with different variables and determine the effects of mass and volume on density. A teacher’s guide and related materials are included on the site. For extended learning, students can complete the Buoyancy Lab*, an online simulation that helps students further explore density properties by adjusting the density of a liquid to determine the effects on buoyancy of a solid object. Related handouts and materials are included on the site. Through these online simulations, students should make a connection between the Layered Liquids Lab and the online density labs. After the labs, discuss the Content Question, What are the relationships among mass, volume, and density? Expand on the investigations from the online density labs by discussing operational definitions and how to calculate density using the How Dense? lab. Explain to students that they will measure the absolute densities of liquids from the Layered Liquids Lab. Note that the liquids are the same as those compared in the Layering Liquids Lab. Each group's lab setup requires 25 ml of each sample liquid. Discuss procedures and data collection in advance of the activity. Explain to students that a bar graph would be appropriate for this type of data. Also, this would be a good opportunity to use a spreadsheet program to input the data and make various types of graphs. This would allow students to quickly see which types of graphs are most revealing and useful. 
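If a spreadsheet program is not available, a short script can do the same comparison. The sketch below is only an illustration and is not part of the unit materials: it uses made-up masses and volumes for four unnamed liquids (placeholders, not data from the lab), computes density as mass divided by volume, and prints the liquids from least to most dense so students can check the order against the layers they observed.

```python
# Minimal density comparison sketch (placeholder values, not real lab data).
# Density = mass / volume; denser liquids sink below less dense ones.

measurements = {
    # liquid label: (mass in g, volume in mL) -- hypothetical numbers for illustration
    "liquid A": (26.0, 25.0),
    "liquid B": (21.5, 25.0),
    "liquid C": (34.5, 25.0),
    "liquid D": (23.0, 25.0),
}

densities = {name: mass / volume for name, (mass, volume) in measurements.items()}

# Print from least to most dense -- the expected top-to-bottom order of the layers.
for name, density in sorted(densities.items(), key=lambda item: item[1]):
    print(f"{name}: {density:.2f} g/mL")
```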
For further investigation, students can navigate to Determination of Density of a Solid*, an Online Labs simulation that allows students to determine the density of various solids by using a virtual spring balance and a measuring cylinder. Before beginning, ask students what they learned about comparing the density of fluids that might help them think about measuring the density of a solid. (Mass is determined by comparing an object of unknown mass to an object of known mass, using a balance scale.) You may wish to review the lab animation* and answer any questions before students enter the simulator. After completing the simulation, have students complete the online quiz* to check for understanding. If you have limited Internet access or computing equipment, you can use the optional Solids Lab procedure instead. Following the procedures outlined in the document, students should be able to find the density of a variety of objects using the appropriate method by the end of this session. Note that students will need two cubes made from different materials (for example, steel and aluminum) and an irregular sample of either steel or aluminum to complete this lab. Ask students, If you put hot and cold water together, what will happen? Discuss predictions and then do the following hot/cold density demonstration: Ask students what the explanation might be for what was observed. Have students write or discuss what ways temperature affects density. Use online simulations to demonstrate what happens to gas molecules under different temperatures. Some recommended simulations include: Discuss scientific modeling and explain how molecules have been modeled in different ways over time. Discuss the density of gases as compared to solids and liquids. Applying Density Concepts to Hot Air Balloons Students are now ready to apply their knowledge about density in the construction of a hot air balloon. Present the Unit Questions: How does the density of specific matter affect the construction process? and What principles of density are applied in hot air balloons? Divide students into small groups. Announce that the class will be hosting the “Got Gas”? Hot Air Balloon Rally. The students’ task is to construct hot air balloons that will give riders the smoothest and longest flight. The students will work in groups and research how hot air balloons work and which variables to consider when constructing balloons. Guide this activity with the balloon research worksheet. Have groups create a balloon name and list as many variables that affect flight time as they can. Discuss these variables as a class, and have students expand and modify notes accordingly. Give each student an experiment data sheet, and present the problem, What causes some hot air balloons to have longer flight times than others? Instruct students to discuss this within their groups, and write hypothesis and prediction statements. (Help narrow the choices of independent variables to those relating to balloon weight, temperature difference inside and outside the balloon, wind speed, and direction.) Have each group make a chart showing independent, dependent, and constant variables. Instruct students to research the materials needed to build their balloons using the Internet sources listed. Students should consider the density of each of their chosen materials (such as straws, plastic sheeting, string, paper cups, and so forth) and provide a rationale for their choices. 
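The balloon project turns on the same idea the gas simulations illustrate: heating air lowers its density, and the difference between the air inside and outside the envelope provides the lift. The sketch below is not part of the published unit; it uses the ideal gas law with assumed classroom-scale numbers (temperatures, balloon volume, envelope mass) so teachers or students can sanity-check whether a design is light enough to fly.

P = 101_325        # sea-level air pressure in pascals
M = 0.02896        # molar mass of dry air in kg/mol
R = 8.314          # gas constant in J/(mol K)

def air_density(temp_celsius):
    """Density of air in kg/m^3 at the given temperature (ideal gas law)."""
    return P * M / (R * (temp_celsius + 273.15))

outside = air_density(20)     # classroom air (assumed 20 C)
inside = air_density(60)      # heated air inside the balloon (assumed 60 C)

volume = 0.1                  # balloon volume in m^3 (assumed small tissue-paper balloon)
envelope_mass = 0.008         # envelope mass in kg (assumed 8 g)

lift = (outside - inside) * volume    # kg of buoyant lift from the density difference
net = lift - envelope_mass            # positive means the balloon can rise

print(f"outside air: {outside:.3f} kg/m^3, inside air: {inside:.3f} kg/m^3")
print(f"lift: {lift * 1000:.1f} g, envelope: {envelope_mass * 1000:.0f} g, net: {net * 1000:.1f} g")

With these assumed values the warm air provides roughly 14 g of lift against an 8 g envelope, which illustrates why groups need to keep material densities, and therefore total envelope mass, as low as possible.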
Have groups turn in a list of supplies needed to build their balloon and have those supplies ready by the next class, or have students bring in their own supplies. A pattern for a hot air balloon is included as an example, or each group can find or make their own pattern.

Students are now ready for construction day. Explain that groups should document the density of each type of material used in their hot air balloon and the rationale for choosing the material. They should also describe how they used principles of density to ensure a long flight time and smooth ride.

Hold the "Got Gas?" Hot Air Balloon Rally! Assign each group a designated flight time. Flight is judged by time, integrity of materials, and smoothness of ride. Tell students to set up a data table and graph while waiting for flight times, and to work on their presentations by drawing illustrations of their project to scan into later publications and/or taking pictures. Students can also use photo editing software or online drawing programs to create high-quality visual representations of their project.

Share the student example slideshow and discuss the criteria for the presentations. Introduce the presentation rubric and keeping track brochure checklist. Explain that students are to complete two presentation projects: a multimedia slideshow and a brochure. Have groups present their multimedia presentations and display their brochures. Have students self- and peer-assess their collaboration skills using the peer rubric and their presentations using the presentation rubric. Note: In addition to the student brochure and slideshow presentations, students may develop a class wiki* on the topic of density.

Students use the density test practice to review the concepts of the density lessons and prepare for the short-answer and practical exam. Present the Essential Question again, How is science applied in the real world? Use density as the focus this time. Encourage students to further investigate this question by researching other situations in which knowing the density of matter is useful (such as using the density of gold to identify fool's gold, packaging materials, body density, construction projects, and so forth).

Gina Aldridge participated in the Intel® Teach Program, which resulted in this idea for a classroom project. A team of teachers expanded the plan into the example you see here.

Grade Level: 6-9
Subject(s): Physical Science
Topics: Properties of Matter
Higher-Order Thinking Skills: Analysis, Experimental Inquiry
Key Learnings: Density, Scientific Method
Time Needed: 4 weeks, 50-minute lessons, daily
Background: From the Classroom in Mesa, Arizona, United States
http://www.intel.com/content/www/us/en/education/k12/project-design/unit-plans/got-gas.html
Due to licensing agreements, online viewing of the videos for this resource is restricted to network connections in the United States and Canada. 1. What Is Statistics? Using historical anecdotes and contemporary applications, this introduction to the series explores the vital links between statistics and our everyday world. The program also covers the evolution of the discipline. 2. Picturing Distributions With this program, students will see how the key characteristics of a histogram's distribution (shape, center, and spread) help professionals make decisions in such diverse fields as meteorology, television programming, health care, and air traffic control. Through a discussion of the advantages of back-to-back stem plots, this program also emphasizes the importance of seeking explanations for gaps and outliers in small data sets. 3. Describing Distributions This program examines the difference between mean and median, explains the use of quartiles to describe a distribution, and looks to the use of boxplots and the five-number summary for comparing and describing data. An illustrative example shows how a city government used statistical methods to correct inequity between men's and women's salaries. 4. Normal Distributions Students will advance from histograms through smooth curves to normal curves, and finally to a single normal curve for standardized measurement, as this program shows ways to describe the shape of a distribution using progressively simpler methods. In a lesson on creating a density curve, students also learn why, under steadily decreasing deviation, today's baseball players are less likely to achieve a .400 batting average. 5. Normal Calculations With this program, students will discover how to convert values to the standard normal scale using the standard deviation; how to use a table of areas to compute relative frequencies; how to find any percentile; and how a computer creates a normal quantile plot to determine whether a distribution is normal. Vehicle emissions standards and medical studies of cholesterol provide real-life examples. 6. Time Series Statistics can reveal patterns over time. Using the concept of seasonal variation, this program shows ways to present smooth data and recognize whether a particular pattern is meaningful. Stock market trends and sleep cycles are used to explore the topics of deriving a time series and using the 68-95-99.7 rule to determine the control limits. 7. Models for Growth Topics of this program include linear growth, least squares, exponential growth, and straightening an exponential growth curve by logarithms. A study of growth problems in children serves to illustrate the use of the logarithm function to transform an exponential pattern into a line. The program also discusses growth in world oil production over time. 8. Describing Relationships Segments describe how to use a scatterplot to display relationships between variables. Patterns in variables (positive, negative, and linear association) and the importance of outliers are discussed. The program also calculates the least squares regression line of metabolic rate y on lean body mass x for a group of subjects and examines the fit of the regression line by plotting residuals. 9. Correlation With this program, students will learn to derive and interpret the correlation coefficient using the relationship between a baseball player's salary and his home run statistics.
Then they will discover how to use the square of the correlation coefficient to measure the strength and direction of a relationship between two variables. A study comparing identical twins raised together and apart illustrates the concept of correlation. 10. Multidimensional Data Analysis This program reviews the presentation of data analysis through an examination of computer graphics for statistical analysis at Bell Communications Research. Students will see how the computer can graph multivariate data and its various ways of presenting it. The program concludes with an example of a study that analyzes data on many variables to get a picture of environmental stresses in the Chesapeake Bay. 11. The Question of Causation Causation is only one of many possible explanations for an observed association. This program defines the concepts of common response and confounding, explains the use of two-way tables of percents to calculate marginal distribution, uses a segmented bar to show how to visually compare sets of conditional distributions, and presents a case of Simpson's Paradox. The relationship between smoking and lung cancer provides a clear example. 12. Experimental Design Statistics can be used to evaluate anecdotal evidence. This program distinguishes between observational studies and experiments and reviews basic principles of design including comparison, randomization, and replication. Case material from the Physician's Health Study on heart disease demonstrates the advantages of a double-blind experiment. 13. Blocking and Sampling Students learn to draw sound conclusions about a population from a tiny sample. This program focuses on random sampling and the census as two ways to obtain reliable information about a population. It covers single- and multi-factor experiments and the kinds of questions each can answer, and explores randomized block design through agriculturalists' efforts to find a better strawberry. 14. Samples and Surveys This program shows how to improve the accuracy of a survey by using stratified random sampling and how to avoid sampling errors such as bias. While surveys are becoming increasingly important tools in shaping public policy, a 1936 Gallup poll provides a striking illustration of the perils of undercoverage. 15. What Is Probability? Students will learn the distinction between deterministic phenomena and random sampling. This program introduces the concepts of sample space, events, and outcomes, and demonstrates how to use them to create a probability model. A discussion of statistician Persi Diaconis's work with probability theory covers many of the central ideas about randomness and probability. 16. Random Variables This program demonstrates how to determine the probability of any number of independent events, incorporating many of the same concepts used in previous programs. An interview with a statistician who helped to investigate the space shuttle accident shows how probability can be used to estimate the reliability of equipment. 17. Binomial Distributions This program discusses binomial distribution and the criteria for it, and describes a simple way to calculate its mean and standard deviation. An additional feature describes the quincunx, a randomizing device at the Boston Museum of Science, and explains how it represents the binomial distribution. 18. The Sample Mean and Control Charts The successes of casino owners and the manufacturing industry are used to demonstrate the use of the central limit theorem. 
One example shows how control charts allow us to effectively monitor random variation in business and industry. Students will learn how to create x-bar charts and the definitions of control limits and out-of-control limits. 19. Confidence Intervals This program lays out the parts of the confidence interval and gives an example of how it is used to measure the accuracy of long-term mean blood pressure. An example from politics and population surveys shows how margin of error and confidence levels are interpreted. The program also explains the use of a formula to convert the z* values into values on the sampling distribution curve. Finally, the concepts are applied to an issue of animal ethics. 20. Significance Tests This program explains the basic reasoning behind tests of significance and the concept of null hypothesis. The program shows how a z-test is carried out when the hypothesis concerns the mean of a normal population with known standard deviation. These ideas are explored by determining whether a poem "fits Shakespeare as well as Shakespeare fits Shakespeare." Court battles over discrimination in hiring provide additional illustration. 21. Inference for One Mean In this program, students discover an improved technique for statistical problems that involve a population mean: the t statistic for use when σ is not known. Emphasis is on paired samples and the t confidence test and interval. The program covers the precautions associated with these robust t procedures, along with their distribution characteristics and broad applications. 22. Comparing Two Means How to recognize a two-sample problem and how to distinguish such problems from one- and paired-sample situations are the subject of this program. A confidence interval is given for the difference between two means, using the two-sample t statistic with conservative degrees of freedom. 23. Inference for Proportions This program marks a transition in the series: from a focus on inference about the mean of a population to exploring inferences about a different kind of parameter, the proportion or percent of a population that has a certain characteristic. Students will observe the use of confidence intervals and tests for comparing proportions applied in government estimates of unemployment rates. 24. Inference for Two-Way Tables A two-way table of counts displays the relationship between two ways of classifying people or things. This program concerns inference about two-way tables, covering use of the chi-square test and null hypothesis in determining the relationship between two ways of classifying a case. The methods are used to investigate a possible relationship between a worker's gender and the type of job he or she holds. 25. Inference for Relationships With this program, students will understand inference for simple linear regression, emphasizing slope, and prediction. This unit presents the two most important kinds of inference: inference about the slope of the population line and prediction of the response for a given x. Although the formulas are more complicated, the ideas are similar to t procedures for the mean μ of a population. 26. Case Study This program presents a detailed case study of statistics at work. Operating in a real-world setting, the program traces the practice of statistics planning the data collection, collecting and picturing the data, drawing inferences from the data, and deciding how confident we can be about our conclusions. 
Students will begin to see the full range and power of the concepts and techniques they have learned.
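Several of the middle programs lean on the same few calculations: standardizing with z-scores, the 68-95-99.7 rule, and a confidence interval for a mean. The short sketch below works through each of these with invented numbers; it is only an illustration of the formulas the programs describe, not material taken from the series.

import math
import statistics

# Standardizing an observation: its distance from the mean in standard deviations.
mu, sigma = 266, 16            # assumed population mean and standard deviation
x = 290
z = (x - mu) / sigma
print(f"z-score of {x}: {z:.2f}")

# The 68-95-99.7 rule: mu +/- 1, 2, 3 sigma covers about 68%, 95%, 99.7% of values.
for k, pct in [(1, 68), (2, 95), (3, 99.7)]:
    print(f"about {pct}% of values fall between {mu - k * sigma} and {mu + k * sigma}")

# A 95% confidence interval for a mean with sigma known: x-bar +/- 1.96 * sigma / sqrt(n).
sample = [268, 271, 259, 274, 266, 263, 270, 265]    # invented sample
xbar = statistics.mean(sample)
margin = 1.96 * sigma / math.sqrt(len(sample))
print(f"95% confidence interval for the mean: ({xbar - margin:.1f}, {xbar + margin:.1f})")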
http://www.learner.org/resources/series65.html
Mammal Species of the World (see also the American Society of Mammalogists species account).
- Original description: Linnaeus, C., 1758. Systema Naturae per regna tria naturae, secundum classes, ordines, genera, species cum characteribus, differentiis, synonymis, locis. Tenth Edition. Laurentii Salvii, Stockholm, 1:42, 824 pp.

DNA evidence shows that the lion, tiger, leopard, jaguar, snow leopard and clouded leopard share a common ancestor and that this group is between 6 and 10 million years old. The fossil record points to the emergence of Panthera just 2 to 3.8 million years ago. Phylogenetic studies have shown that the clouded leopard (Neofelis nebulosa) is basal to this group. The position of the remaining species varies between studies and is effectively unresolved. Analysis of jaguar mitochondrial DNA has dated the species' lineage to between 280,000 and 510,000 years ago, later than suggested by fossil records.

Jaguar females reach sexual maturity at about 2 years of age, and males at 3 or 4. The cat mates throughout the year, although births may increase when prey is plentiful. The female oestrus (the time of heightened sexual activity) is 6–17 days out of a full 37-day cycle. Females advertise fertility with urinary scent marks and increased vocalisation. A jaguar's pregnancy lasts 93–105 days. Females give birth most commonly to 2 cubs but can have up to 4. The life-span of a jaguar in the wild is estimated to be 12–15 years.

Jaguars have a large distribution; they are found from southern Arizona and New Mexico south toward northern Argentina and northeastern Brazil. However, populations have been substantially reduced or eliminated in some areas, including El Salvador, the United States, and large portions of Mexico. Jaguars currently encompass a range of approximately 8.75 million square kilometers, or 46% of their historical range. The largest contiguous distribution of jaguars is concentrated in the Amazon Basin and includes portions of the Cerrado, Pantanal, and Chaco areas to the south. This range extends north and east to the Caribbean coast of Venezuela and the Guianas. Populations have been reduced primarily in northern Mexico, the United States, northern Brazil, and southern Argentina. Populations have been extirpated in the Argentine Monte region and the grasslands of the Pampas. Jaguars are not typically found at higher elevations, such as the Pantepui or Puna montane grasslands. Biogeographic Regions: nearctic (Native); neotropical (Native)
- Sanderson, E., K. Redford, C. Chetkiewicz, R. Medellin, A. Rabinowitz. 2002. Planning to save a species: the jaguar as a model. Conservation Biology, 16/1: 58-72.
- Carrillo, E. 2007. Tracking the elusive jaguar. Natural History, 116/4: 30-34.

Sanderson et al. (2002) presented a group exercise to define the most important areas for conservation of viable jaguar populations (Jaguar Conservation Units or JCUs). These 51 areas add up to 1.29 million km², or 13% of jaguar range. The species occurs (regularly, as a native taxon) in multiple nations. Global Range: (>2,500,000 square km (greater than 1,000,000 square miles)) The jaguar once ranged throughout tropical lowlands of Mexico, Central America (now very rare except in Belize), and South America (to northern Argentina); in the United States, there are records from southern California, Arizona (Hoffmeister 1986, Johnson and Van Pelt 1997), New Mexico (Findley et al.
1975, Frey 2004), Texas (Schmidly 2004), and perhaps farther east in Louisiana; most records are from Arizona, where a minimum of 64 jaguars have been killed since 1900; some believe that a breeding population formerly existed in portions of the southwestern United States (Federal Register, 13 July 1994, 22 July 1997, which see for a state-by-state review of records). The species is now absent from much of the former range; it has been extirpated as a resident in most or all of the northern extent of the range in the southwestern United States and northern Mexico (see Federal Register, 13 July 1994, p. 35676, for discussion of recent records), El Salvador, Uruguay, developed areas of the Brazilian coast, all but the northernmost parts of Argentina, and elsewhere. The largest remaining population is in Amazonian Brazil (Seymour 1989). In recent decades, jaguars occasionally have strayed into the United States in southern Arizona-New Mexico. U.S.A. (AZ, CA, LA, NM, TX), Mexico, Central and South America.

Jaguars are the largest cats in the Americas and the only representative of the genus Panthera found in the New World. Height at the shoulder may be up to 75 cm. Body length is 150 to 180 cm, with a tail of 70 to 90 cm. Jaguars weigh between 68 and 136 kilograms. Jaguars are powerfully built, with large, square jaws and prominent cheeks. Jaguars have lean bodies and muscular limbs. They are built for power, not speed, although they can run briefly. A jaguar was observed dragging a 34 kg sea turtle 91.5 meters into the cover of a forest. They hunt by pouncing on unsuspecting prey. Base coat colors range from pale yellow to reddish brown, with black, rosette-shaped spots on the neck, body, and limbs. The belly is off-white. Black, or melanistic, jaguars are fairly common and are the result of a single, dominant allele. These jaguars have a base coat color of black with black spots that are usually dimly visible against the black background. Melanistic jaguars are more common in forested habitats. The largest jaguars are found in the Brazilian Pantanal, where males average 100 kg and females 76 kg. The smallest jaguars are found in Honduras, where males average 57 kg and females 42 kg. In general, jaguars found in dense forests are smaller than those found in more open habitats, possibly because densities of large ungulate prey are greater in open habitats. Male jaguars are generally 10 to 20% larger than females. The dental formula is: I 3/3, C 1/1, PM 3/2, and M 1/1.

Range mass: 68 to 136 kg. Average mass: 100 kg. Range length: 1.5 to 1.85 m. Average length: 1.75 m. Average basal metabolic rate: 62.4190 cm^3 oxygen/hour. Other Physical Features: endothermic; homoiothermic; bilateral symmetry. Sexual Dimorphism: male larger. Average basal metabolic rate: 62.419 W.
- Grzimek, B. 1973. Grzimek's animal life encyclopedia. New York, NY: Van Nostrand Reinhold Company.
- Baker, W., S. Deem, A. Hunt, L. Munson, S. Johnson. 2002. Jaguar species survival plan. Pp. 9-13 in C. Law, ed. Guidelines for captive management of jaguars, Vol. 1/1. Fort Worth, Texas: Jaguar Species Survival Plan Management Group.

Length: 242 cm. Weight: 136,000 grams. Size in North America: range 1,100-1,850 mm; range 31-158 kg.

Habitat and Ecology

A 13-year-old wild female was found with a cub (Brown and Lopez-Gonzalez 2001). Density estimates ranged from 1.7-4 adults per 100 km² in studies in Brazil, Peru, Colombia and Mexico summarized by Sunquist and Sunquist (2002). Density estimates by Silver et al.
(2004) from five different study sites ranged from 2.4-8.8 adults per 100 km², with the highest density found in Belize's Cockscomb Basin Wildlife Reserve (rainforest), a density similar to the 6-8 per 100 km² found by Rabinowitz and Nottingham (1986). That study found home ranges of females of 10 km², overlapped by male home ranges which varied from 28-40 km² and also overlapped extensively. In other areas jaguar home ranges have been over 1,000 km² (T. de Oliveira pers. comm. 2008). Soisalo and Cavalcanti (2006) used GPS telemetry to check density estimates derived from a common camera trap methodology in the Brazilian Pantanal, and cautioned that the method may over-estimate population size. Telemetry data indicated a density of 6.6-6.7 adult jaguars per 100 km², while densities derived from mean maximum distance moved (MMDM) extrapolations from camera trap captures were higher, at 10.3-11.7/100 km². Jaguar densities in the Paraguayan Gran Chaco are 2.27–5.37 per 100 km² (Cullen Jr. et al. in submission), and in the Colombian Amazon, 4.5/100 km² in Amacayacu National Park and 2.5/100 km² in unprotected areas (Payan 2008). In Brazil, densities are 2 per 100 km² in the savannas of the Cerrado, 3.5/100 km² in the semiarid scrub of the Caatinga, and 2.2/100 km² in the Atlantic Forest (Silveira 2004, in litt. to T. de Oliveira 2008).

Comments: Habitat includes a wide variety of situations, such as tropical and subtropical forests, lowland scrub and woodland, thorn scrub, pampas/llanos, desert, swampy savanna, mangrove swamps, lagoons, marshland, and floating islands of vegetation. At the southern extreme of the range, this cat inhabits open savanna, flooded grasslands, and desert mountains; at the northern extreme it may be found in chaparral and timbered areas. Young are born in a sheltered place such as a cave or thicket, under an uprooted tree, among rocks, or under a river bank (Seymour 1989). Jaguars prefer dense, tropical moist lowland forests that offer plenty of cover, although they are also found in scrubland, reed thickets, coastal forests, swamps, and thickets. Jaguars are excellent swimmers and are generally found in habitats near water, such as rivers, slow-moving streams, lagoons, watercourses, and swamps. They are not typically found in arid areas. Jaguars have been reported from as high as 3800 m in Costa Rica, but they are generally not common in montane forests and are not found above 2700 meters in the Andes. In northern Mexico and the southwestern United States, jaguars are found in oak woodlands, mesquite thickets, and riparian woodlands. Jaguars stalk their prey on the ground, preferring thick vegetation for cover. Jaguars are also able to climb trees for safety or to hunt. Jaguars require three habitat characteristics to support healthy populations: a water supply, dense cover, and sufficient prey.

Range elevation: 10 to 2000 m. Average elevation: 100 m. Habitat Regions: temperate; tropical; terrestrial. Terrestrial Biomes: forest; rainforest; scrub forest. Other Habitat Features: riparian.
- Nowak, R. 1999. Walker's mammals of the world. Maryland: The Johns Hopkins University Press.
- 1996. "IUCN - The World Conservation Union" (On-line). Jaguar (Panthera onca). Accessed December 31, 2008 at http://www.catsg.org/catsgportal/cat-website/20_cat-website/home/index_en.htm.

Non-Migrant: Yes. At least some populations of this species do not make significant seasonal migrations. Juvenile dispersal is not considered a migration. Locally Migrant: No.
No populations of this species make local extended movements (generally less than 200 km) at particular times of the year (e.g., to breeding or wintering grounds, to hibernation sites). Long Distance Migrant: No. No populations of this species make annual migrations of over 200 km.

Comments: Feeds on large and small mammals, reptiles and ground-nesting birds. Known to feed on peccaries, capybaras, tapirs, agoutis, deer, small crocodilians and turtles; opportunistic, see Seymour (1989) for further details. Hunts mostly on the ground but may pounce on prey from a tree or ledge. Jaguars are strict carnivores. They eat a wide variety of prey; over 85 species have been reported in the diet of jaguars. Preferred prey are large animals, such as peccaries, tapirs, and deer. They also prey on caimans, turtles, snakes, porcupines, capybaras, fish, large birds, and many other animals. Jaguars typically attack prey by pouncing on them from a concealed spot. They either deliver a direct bite to the neck and then suffocate their prey, or they instantly kill them by piercing the back of the skull with their canines. Their powerful jaws and canines allow them to get through thick reptilian skin and turtle carapaces. Jaguars then drag their prey to a secluded spot where they eat them. Animal Foods: birds; mammals; reptiles; fish. Primary Diet: carnivore (eats terrestrial vertebrates).

Known prey organisms: This list may not be complete but is based on published studies. Jaguars are top predators and considered a keystone species because of their impact on the populations of other animals in the ecosystem. Internal parasites include lung flukes, tapeworms, hookworms, and whipworms. External parasites include ticks and warble fly larvae. Ecosystem Impact: keystone species.
- Labruna, M., R. Jorge, D. Sana. 2005. Ticks (Acari: Ixodida) on wild carnivores in Brazil. Experimental and Applied Acarology, 36/1: 151-165.
- Glen, A., C. Dickman. 2005. Complex interactions among mammalian carnivores in Australia, and implications for wildlife management. Biological Reviews of the Cambridge Philosophical Society, 80/3: 387-401.
- Seymour, K. 1989. Panthera onca. Mammalian Species, 340: 1-9.

Humans are the primary predators of jaguars. Jaguars are victims of illegal poaching by humans for their pelts, paws, and teeth. They are cryptically colored and secretive, which helps them to hunt their prey and avoid detection by humans.
- humans (Homo sapiens)
Anti-predator Adaptations: cryptic

Number of Occurrences
Note: For many non-migratory species, occurrences are roughly equivalent to populations. Estimated Number of Occurrences: Unknown
Comments: The number of occurrences or subpopulations is difficult to define for this species (individuals of which may range over vast areas) and not a very meaningful measure of conservation status. Population size and area of occupancy are more relevant considerations. However, see Sanderson et al. (2002), who identified jaguar-occupied areas that could be regarded as distinct occurrences or subpopulations.

10,000 - 1,000,000 individuals
Comments: Total adult population size is unknown but surely exceeded 100,000 in the 1960s (annual kills in Brazil alone were estimated at 15,000 in the 1960s). However, based on estimates of density and geographic range (Nowell and Jackson 1996), the jaguar's total effective population size has been estimated at fewer than 50,000 mature breeding individuals.
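The population figures quoted in this account all rest on the same basic arithmetic: an estimated density (adults per 100 km²) multiplied by an area of occupied habitat. The sketch below only illustrates that calculation, using an assumed reserve size and the range of densities reported above; published estimates use site-specific densities and count only mature individuals, so the numbers here are not comparable to the assessments cited.

def estimate_adults(density_per_100km2, area_km2):
    """Rough number of adults from a density estimate and an occupied area."""
    return density_per_100km2 * area_km2 / 100

reserve_area_km2 = 10_000    # hypothetical block of occupied habitat, in km^2

for density in (2.0, 4.0, 8.0):    # spans the adult densities per 100 km^2 quoted above
    adults = estimate_adults(density, reserve_area_km2)
    print(f"{density:.1f} adults/100 km^2 over {reserve_area_km2:,} km^2 -> about {adults:,.0f} adults")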
A population of 600-1,000 exists in Belize, and there may be 500 in Guatemala and no more than 500 in all of Mexico (see Nowak 1999). Studies in the 1980s estimated numbers in the Pantanal of Brazil and its peripheral area to range from 1,000 to 3,500 individuals, with an additional 1,400 individuals to the north of the Pantanal in the Guapore River Basin (see Swank and Teer 1989). The Paraguayan Gran Chaco may host a few thousand jaguars based on densities of 1 per 25 to 75 square kilometers in an area of 176,000 square kilometers.

Solitary and somewhat territorial, except during the breeding season. Density estimated at 4/137 sq km in Brazil, 25-30 per 250 sq km in Belize (Seymour 1989). In Belize, daily home range may be only a few sq km, but may shift to a new area every week or two. Home range in Brazil was estimated at 25-76 sq km (see Kitchener 1991). The major cause of mortality is hunting by humans. Although the jaguar prefers dense forest and lives mainly in South and Central American rain forests, it is occasionally found in open, seasonally flooded wetlands and dry grassland terrain. The jaguar prefers to live by rivers, swamps and in dense rainforest with thick cover for stalking prey.

Life History and Behavior

Communication and Perception

Jaguars mainly communicate through vocalizations. Vocalizations are grunting "uhs" increasing in tone and power, while decreasing in frequency between grunts. The typical vocalization includes seven to a dozen grunts, depending on whether the individual is a male, female, or female entering estrus. Males generally have more powerful vocalizations than females, whose grunts are softer except when in estrus. During estrus, female jaguars call late into the night through early dawn, using 5 to 7 grunts to announce themselves. Male vocalizations in response to estrus females are hoarse and guttural. Hunters take advantage of this, using a hollow gourd to mimic the call and attract jaguars. Jaguars advertise territories through vocalizations, scraping the ground and trees, and defecating and urinating on prominent locations. Communication Channels: visual; acoustic; chemical. Perception Channels: visual; acoustic.

Comments: Active throughout the year. Hunts primarily at night, but may be active day or night (Seymour 1989). Jaguars can live 11 to 12 years in the wild. Illness, accident, interactions with other animals, or hunting are major sources of mortality. In captivity jaguars may live over 20 years. Status: captivity: 28 (high) years. Status: captivity: 20 years. Status: wild: 11 to 12 years. Status: captivity: 20.0 years. Status: captivity: 22.0 years.

Lifespan, longevity, and ageing

In tropical areas jaguars may breed throughout the year; births most common November-December in Paraguay, December-May in Brazil, March-July in Argentina, July-September in Mexico, June-August in Belize. Gestation lasts about 90-115 days. Litter size is 1-4 (average 2). Young begin to eat meat at about 10-11 weeks, though may suckle 5-6 months; remain in den about 1.5-2 months; stay with mother 1.5-2 years; females sexually mature in 2-3 years, males in 3-4 years (Seymour 1989). Jaguars typically communicate through vocalizations. Females in estrus venture out of their territory to call during the morning and late at night, advertising for a mate. Males answer those calls with their own vocalizations and travel to the female's territory to mate, leading to competition between males for that mating opportunity.
It is not uncommon for a female to travel with one or two male jaguars during estrus, although a dominant male will usually drive a smaller male away. Females do not tolerate the presence of males after mating and especially after their cubs are born. Mating System: polygynandrous (promiscuous).

The estrus cycle is usually 37 days, with an estrus length of 6 to 17 days. Estrus may be indicated by behavioral changes such as lordosis, flehmen, vocalization, rolling, and increased scent marking. Males may show an increase in androgen levels throughout the year, but hormone levels peak during the time of receding flood waters in some areas. Jaguars may produce offspring year-round, but mating typically increases during the months of December through March. Most births occur during the wet season, when prey is more abundant. Females give birth to 2 offspring (range 1 to 4) after a gestation period of 91 to 111 days. Female reproductive maturity occurs between 12 and 24 months; males become sexually mature at 24 to 36 months.

Breeding interval: Females breed every two years. Breeding season: Jaguars may breed throughout the year, but most births occur in wet seasons, when prey is more abundant. Range number of offspring: 1 to 4. Average number of offspring: 2. Range gestation period: 91 to 111 days. Range birth mass: 700 to 900 g. Range weaning age: 5 to 6 months. Range time to independence: 1.75 to 2.5 years. Range age at sexual or reproductive maturity (female): 12 to 24 months. Range age at sexual or reproductive maturity (male): 24 to 36 months. Key Reproductive Features: iteroparous; seasonal breeding; year-round breeding; gonochoric/gonochoristic/dioecious (sexes separate); viviparous. Average number of offspring: 2.

Cubs are born with their eyes closed and are completely dependent on their mother. Their eyes open at around two weeks old. Cubs nurse until they are 5 to 6 months old, at which time they begin to hunt with their mother. They depend on their mother for protection from predators, for food, and for guidance and teaching as they grow. Offspring are dependent on their mother until they are almost two years old. Parental Investment: altricial; female parental care; pre-fertilization (Provisioning, Protecting: Female); pre-hatching/birth (Provisioning: Female, Protecting: Female); pre-weaning/fledging (Provisioning: Female, Protecting: Female); pre-independence (Provisioning: Female, Protecting: Female); extended period of juvenile learning.
- Grzimek, B. 1973. Grzimek's animal life encyclopedia. New York, NY: Van Nostrand Reinhold Company.
- 1996. "IUCN - The World Conservation Union" (On-line). Jaguar (Panthera onca). Accessed December 31, 2008 at http://www.catsg.org/catsgportal/cat-website/20_cat-website/home/index_en.htm.
- Baker, W., S. Deem, A. Hunt, L. Munson, S. Johnson. 2002. Jaguar species survival plan. Pp. 9-13 in C. Law, ed. Guidelines for captive management of jaguars, Vol. 1/1. Fort Worth, Texas: Jaguar Species Survival Plan Management Group.
Molecular Biology and Genetics

Statistics of barcoding coverage: Panthera onca. Public Records: 0. Specimens with Barcodes: 2. Species With Barcodes: 1.

IUCN Red List Assessment (Red List Category and Criteria)
- 2002: Near Threatened
- 1996: Lower Risk/near threatened
- 1990: Vulnerable (IUCN 1990)
- 1988: Vulnerable (IUCN Conservation Monitoring Centre 1988)
- 1986: Vulnerable (IUCN Conservation Monitoring Centre 1986)
- 1982: Vulnerable (Thornback and Jenkins 1982)

National NatureServe Conservation Status: Rounded National Status Rank: N1 - Critically Imperiled. NatureServe Conservation Status: Rounded Global Status Rank: G3 - Vulnerable. Reasons: Large range extends from the southwestern U.S. to northern Argentina, but distribution and abundance have been drastically reduced due to habitat destruction, overexploitation by the fur industry, illegal and excessive hunting, and predator control activities.

Date Listed: 03/28/1972. Lead Region: Southwest Region (Region 2). Population location: entire. Listing status: E. For the most current information and documents related to the conservation status and management of Panthera onca, see its USFWS Species Profile.

Threats include:
- deforestation across its habitat
- increasing competition for food with humans
- hurricanes in northern parts of its range
- the behaviour of ranchers, who will often kill the cat where it preys on livestock

Protection:
- all international trade in jaguars or their parts is prohibited
- all hunting of jaguars is prohibited in Argentina, Belize, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Surinam, the United States (where it is listed as endangered under the Endangered Species Act), Uruguay and Venezuela
- hunting jaguars is restricted to 'problem animals' in Brazil, Costa Rica, Guatemala, Mexico and Peru, while trophy hunting is still permitted in Bolivia
- the species has no legal protection in Ecuador or Guyana

Current conservation efforts focus on educating ranch owners and promoting eco-tourism. The jaguar is defined as an 'umbrella species', a species whose home range and habitat requirements are sufficiently broad that, if protected, numerous other species of smaller range will also be protected. In 1991, 600–1,000 jaguars (the highest total to date) were estimated to be living in Belize. In 2003 and 2004, researchers using GPS telemetry found only 6 or 7 jaguars per 100 square kilometres in the Pantanal region of Brazil. Jaguar conservation occurs by protecting jaguar hotspots, large areas populated by about 50 jaguars. To maintain the species it is important that the jaguar gene pool is mixed, and this depends on jaguars being interconnected. A new project, the Paseo del Jaguar, has been established to connect jaguar hotspots.

Jaguars are considered near threatened by the IUCN. They are considered endangered by the U.S. Fish and Wildlife Service and are on Appendix I of CITES. Many populations remain stable, but jaguars are threatened throughout most of their range by hunting, persecution, and habitat destruction. Jaguars are persecuted especially in areas of cattle ranching, where they are often shot on sight despite protective legislation.
US Federal List: endangered. CITES: Appendix I. State of Michigan List: no special status. IUCN Red List of Threatened Species: near threatened.

Jaguar conservation projects

Learn more about current and concluded jaguar conservation projects executed by NGOs, government ministries and research institutions in the Rainforest Alliance's Eco-Index: www.eco-index.org/search/keyword_complete.cfm?keyword=jaguar.
- Eco-Index: www.eco-index.org

Other high-probability areas for long-term jaguar persistence include tropical moist lowland forest in Central America: the Selva Maya of Guatemala, Mexico and Belize, and a narrow strip of the Choco-Darien of Panama and Colombia to northern Honduras. Densities in the Belizean Selva Maya rainforest were estimated at 7.5-8.8/100 km² (Silver et al. 2004). The Talamanca Mountains of Costa Rica and Panama also host a population, but its long-term persistence is uncertain (Gonzalez-Maya et al. 2007). Eighteen percent of jaguar range (1.6 million km²) was estimated to have medium probability of long-term survival. These areas are generally adjacent to high-probability areas and include a large portion of the northern Cerrado, most of the Venezuelan and Colombian llanos, and the northern part of Colombia on the Caribbean coast. In Central America and Mexico, medium-probability areas include the highlands of Costa Rica and Panama, southern Mexico, and the two eastern mountain ranges of Mexico, the Sierra de Tamaulipas and the Sierra Madre Oriental. The remainder of jaguar range was classified as low probability for jaguar survival, and of most urgent conservation concern. These areas include the Atlantic Tropical Forest and Cerrado of Brazil; parts of the Chaco in northern Argentina; the Gran Sabana of northern Brazil, Venezuela and Guyana; parts of the coastal dry forest in Venezuela; and the remainder of the range in Central America and Mexico. Some of the most important areas for jaguar conservation (Jaguar Conservation Units) fell within parts of jaguar range where the probability of long-term survival was considered low, and so represent the most endangered jaguar populations. These include the Atlantic Forests of Brazil, northern Argentina, central Honduras, and the Osa Peninsula of Costa Rica (Sanderson et al. 2002). The Atlantic Forest subpopulation in Brazil has been estimated at 200 ± 80 adults (Leite et al. 2002). Jaguar populations in the Chaco region of northern Argentina and Brazil, and the Brazilian Caatinga, are low-density and highly threatened by livestock ranching and persecution (Altrichter et al. 2006, T. de Oliveira pers. comm. 2008).

Global Short Term Trend: Decline of 30-70%. Global Long Term Trend: Decline of 30-70%.
Comments: Sanderson et al. (2002) determined that, as of 1999, the known, occupied range of the jaguar had contracted to approximately 46% of estimates of its 1900 range; jaguar status and distribution were unknown in another 12% of the jaguar's former range, including large areas in Mexico, Colombia, and Brazil. Of the historical range, jaguars are known to have been extirpated in 37% of the area, while jaguar status in 18% of the area is unknown (Sanderson et al. 2002). Commercial hunting and trapping of jaguars for their pelts has declined drastically since the mid-1970s, when anti-fur campaigns and CITES controls progressively shut down international markets (Nowell and Jackson 1996). However, although hunting has decreased, there is still demand for jaguar paws, teeth and other products.
Degree of Threat: A: Very threatened throughout its range; communities directly exploited or their composition and structure irreversibly threatened by man-made forces, including exotic species.
Comments: Rapid declines occurred in Central and South America during the 1960s due to human exploitation. During this period more than 15,000 skins were brought out of the Brazilian Amazon alone each year (see Weber and Rabinowitz 1996). Approximately 13,500 pelts entered the United States in 1968 (Nowak 1999). Subsequent national and international conservation agreements appear to have reduced the kill (Nowak 1999), but declines have continued due to deforestation and habitat fragmentation, blockage of movement corridors, excessive human exploitation of jaguar prey, human take due to conflicts with the livestock industry, illegal hunting, and predator control activities (Weber and Rabinowitz 1996). Populations isolated by deforestation probably incur increased vulnerability to killing by humans (many are shot on sight regardless of protection). Although direct killing and habitat destruction are responsible for declines, the importance of these activities varies regionally due to differences in habitat, prey availability, economic development, and cultural mores (Quigley and Crawshaw 1992). Future development in and around the Pantanal will eliminate populations (Quigley and Crawshaw 1992). With habitat fragmentation a major threat, and taxonomic research suggesting little significant difference among jaguar populations, an ambitious program has been launched to conserve a continuous north-to-south habitat corridor through the species' range (Rabinowitz 2007). Addressing livestock management and problem-animal issues is a high priority for conservation effort in many jaguar range countries.

Biological Research Needs: Obtain better information on movements and population structure and dynamics.

Global Protection: Several (4-12) occurrences appropriately protected and managed.
Comments: The jaguar is included in CITES Appendix I. It is fully protected at the national level across most of the range, with hunting prohibited in Argentina, Brazil, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Suriname, the United States, Uruguay, and Venezuela, and hunting restrictions in place in Brazil, Costa Rica, Guatemala, Mexico, and Peru (Nowell and Jackson 1996). The species also occurs within protected areas in some of its range. Many large areas of habitat have been protected in the Neotropics, and current conservation efforts focus on protection of a biological corridor along the land bridge between North and South America (Weber and Rabinowitz 1996). Needs: Protect large tracts of habitat with adequate prey and low levels of human activity.

Relevance to Humans and Ecosystems

Comments: In the 1960s, an estimated 15,000 were being killed annually (for the fur industry) in the Amazonian region of Brazil. The recorded number of pelts entering the U.S. in 1968 was 13,516. See Nowak (1991).

Economic Importance for Humans: Negative
Jaguars occasionally hunt cattle and other livestock, which leads to persecution by ranchers. Some countries, such as Brazil, Costa Rica, Guatemala, Mexico, and Peru, restrict the hunting of jaguars to "problem animals" that repeatedly kill livestock. Bolivia allows trophy hunting of jaguars. Jaguars do not attack humans without provocation. Occasionally jaguars have been observed following humans, but this is thought to be an attempt to "escort" them out of their territory.
Negative Impacts: injures humans (bites or stings) Economic Importance for Humans: Positive Jaguars are top predators and keystone species in the ecosystems they inhabit. Jaguar pelts and furs are sold for profit, despite it being illegal to hunt them in most countries. The implementation of laws protecting jaguars has improved in recent years. Jaguars are also an important source of ecotourism income to local communities where jaguars might be observed. Positive Impacts: body parts are source of valuable material; ecotourism ; research and education The jaguar (pron.: // or UK //; Panthera onca) is a big cat, a feline in the Panthera genus, and is the only Panthera species found in the Americas. The jaguar is the third-largest feline after the tiger and the lion, and the largest in the Western Hemisphere. The jaguar's present range extends from Southern United States and Mexico across much of Central America and south to Paraguay and northern Argentina. Apart from a known and possibly breeding population in Arizona (southeast of Tucson), the cat has largely been extirpated from the United States since the early 20th century. This spotted cat most closely resembles the leopard physically, although it is usually larger and of sturdier build and its behavioural and habitat characteristics are closer to those of the tiger. While dense rainforest is its preferred habitat, the jaguar will range across a variety of forested and open terrains. It is strongly associated with the presence of water and is notable, along with the tiger, as a feline that enjoys swimming. The jaguar is largely a solitary, opportunistic, stalk-and-ambush predator at the top of the food chain (an apex predator). It is a keystone species, playing an important role in stabilizing ecosystems and regulating the populations of the animals it hunts. The jaguar has an exceptionally powerful bite, even relative to the other big cats. This allows it to pierce the shells of armoured reptiles and to employ an unusual killing method: it bites directly through the skull of prey between the ears to deliver a fatal bite to the brain. The jaguar is a near threatened species and its numbers are declining. Threats include loss and fragmentation of habitat. While international trade in jaguars or their parts is prohibited, the cat is still frequently killed by humans, particularly in conflicts with ranchers and farmers in South America. Although reduced, its range remains large; given its historical distribution, the jaguar has featured prominently in the mythology of numerous indigenous American cultures, including that of the Maya and Aztec. The word comes to English from one of the Tupi–Guarani languages, presumably the Amazonian trade language Tupinambá, via Portuguese jaguar. The Tupian word, yaguara "beast", is sometimes translated as "dog". The specific word for jaguar is yaguareté, with the suffix -eté meaning "real" or "true". The first component of its taxonomic designation, Panthera, is Latin, from the Greek word for leopard, πάνθηρ, the type species for the genus. This has been said to derive from the παν- "all" and θήρ from θηρευτής "predator", meaning "predator of all" (animals), though this may be a folk etymology—it may instead be ultimately of Sanskrit origin, from pundarikam, the Sanskrit word for "tiger". Onca is the Portuguese onça, with the cedilla dropped for typographical reasons, found in English as ounce for the snow leopard, Uncia uncia. 
It derives from the Latin lyncea lynx, with the letter L confused with the definite article (Italian lonza, Old French l'once). Taxonomy and evolution The jaguar, Panthera onca, is the only extant New World member of the Panthera genus. DNA evidence shows the lion, tiger, leopard, jaguar, snow leopard, and clouded leopard share a common ancestor, and that this group is between six and ten million years old; the fossil record points to the emergence of Panthera just two to 3.8 million years ago. Phylogenetic studies generally have shown the clouded leopard (Neofelis nebulosa) is basal to this group. The position of the remaining species varies between studies and is effectively unresolved. Based on morphological evidence, British zoologist Reginald Pocock concluded the jaguar is most closely related to the leopard. However, DNA evidence is inconclusive and the position of the jaguar relative to the other species varies between studies. Fossils of extinct Panthera species, such as the European jaguar (Panthera gombaszoegensis) and the American lion (Panthera atrox), show characteristics of both the lion and the jaguar. Analysis of jaguar mitochondrial DNA has dated the species' lineage to between 280,000 and 510,000 years ago, later than suggested by fossil records. Asian ancestry While jaguars now live only in the Americas, they are descended from Old World cats. Two million years ago, scientists believe, the jaguar and its closest relative, the similarly spotted leopard, shared a common ancestor in Asia. In the early Pleistocene, the forerunners of modern jaguars crept across Beringia, the land bridge that once spanned the Bering Strait and connected Asia and North America. These jaguar ancestors then moved south into Central and South America, feeding on the deer and other grazing animals that once covered the landscape in huge herds. Geographical variation The last taxonomic delineation of the jaguar subspecies was performed by Pocock in 1939. Based on geographic origins and skull morphology, he recognized eight subspecies. However, he did not have access to sufficient specimens to critically evaluate all subspecies, and he expressed doubt about the status of several. Later consideration of his work suggested only three subspecies should be recognized. Recent studies have also failed to find evidence for well-defined subspecies, and are no longer recognized. Larson (1997) studied the morphological variation in the jaguar and showed there is clinal north–south variation, but also the differentiation within the supposed subspecies is larger than that between them, and thus does not warrant subspecies subdivision. A genetic study by Eizirik and coworkers in 2001 confirmed the absence of a clear geographical subspecies structure, although they found that major geographical barriers, such as the Amazon River, limited the exchange of genes between the different populations. A subsequent, more-detailed study confirmed the predicted population structure within the Colombian jaguars. - Panthera onca onca: Venezuela through the Amazon, including - P. o. peruviana (Peruvian jaguar): Coastal Peru - P. o. hernandesii (Mexican jaguar'): Western Mexico – including - P. o. palustris (the largest subspecies, weighing more than 135 kg or 300 lb): The Pantanal regions of Mato Grosso and Mato Grosso do Sul, Brazil, along the Paraguay River into Paraguay and northeastern Argentina. The Mammal Species of the World continues to recognize nine subspecies, the eight subspecies above and additionally P. o. 
paraguensis. Biology and behavior Physical characteristics The jaguar is a compact and well-muscled animal. Size and weight vary considerably: weights are normally in the range of 56–96 kg (124–211 lb). Larger males have been recorded to weigh as much as 160 kg (350 lb) (roughly matching a tigress or lioness), and the smallest females have low weights of 36 kg (79 lb). Females are typically 10–20% smaller than males. The length, from the nose to the base of the tail, of the cats varies from 1.2 to 1.95 m (3.9 to 6.4 ft). Their tails are the shortest of any big cat, at 45 to 75 cm (18 to 30 in) in length. Their legs are also short, considerably shorter when compared to a small tiger or lion in a similar weight range, but are thick and powerful. The jaguar stands 63 to 76 cm (25 to 30 in) tall at the shoulders. Compared to the similarly colored Old World leopard, this cat is bigger, heavier and relatively stocky in build. Further variations in size have been observed across regions and habitats, with size tending to increase from the north to south. A study of the jaguar in the Chamela-Cuixmala Biosphere Reserve on the Mexican Pacific coast, showed ranges of just about 50 kg (110 lb), about the size of the cougar. By contrast, a study of the jaguar in the Brazilian Pantanal region found average weights of 100 kg (220 lb), and weights of 136 kilograms (300 lb) or more are not uncommon in old males. Forest jaguars are frequently darker and considerably smaller than those found in open areas (the Pantanal is an open wetland basin), possibly due to the smaller numbers of large, herbivorous prey in forest areas. A short and stocky limb structure makes the jaguar adept at climbing, crawling, and swimming. The head is robust and the jaw extremely powerful. The jaguar has the strongest bite of all felids, capable of biting down with 2,000 lbf (910 kgf). This is twice the strength of a lion and the second strongest of all mammals after the spotted hyena; this strength adaptation allows the jaguar to pierce turtle shells. A comparative study of bite force adjusted for body size ranked it as the top felid, alongside the clouded leopard and ahead of the lion and tiger. It has been reported that "an individual jaguar can drag a 360 kg (800 lb) bull 8 m (25 ft) in its jaws and pulverize the heaviest bones". The jaguar hunts wild animals weighing up to 300 kg (660 lb) in dense jungle, and its short and sturdy physique is thus an adaptation to its prey and environment. The base coat of the jaguar is generally a tawny yellow, but can range to reddish-brown and black, for most of the body. However, the ventral areas are white. The cat is covered in rosettes for camouflage in the dappled light of its forest habitat. The spots vary over individual coats and between individual jaguars: rosettes may include one or several dots, and the shapes of the dots vary. The spots on the head and neck are generally solid, as are those on the tail, where they may merge to form a band. While the jaguar closely resembles the leopard, it is sturdier and heavier, and the two animals can be distinguished by their rosettes: the rosettes on a jaguar's coat are larger, fewer in number, usually darker, and have thicker lines and small spots in the middle that the leopard lacks. Jaguars also have rounder heads and shorter, stockier limbs compared to leopards. Color morphism The black morph is less common than the spotted form but, at about six percent of the population, it is several orders of magnitude above the rate of mutation. 
Hence, it is being supported by selection. Some evidence indicates the melanism allele is dominant. The black form may be an example of heterozygote advantage; breeding in captivity is not yet conclusive on this. Extremely rare albino individuals, sometimes called white panthers, also occur among jaguars, as with the other big cats. As usual with albinos in the wild, selection keeps the frequency close to the rate of mutation. Reproduction and life cycle Jaguar females reach sexual maturity at about two years of age, and males at three or four. The cat is believed to mate throughout the year in the wild, although births may increase when prey is plentiful. Research on captive male jaguars supports the year-round mating hypothesis, with no seasonal variation in semen traits and ejaculatory quality; low reproductive success has also been observed in captivity. Female estrus is 6–17 days out of a full 37-day cycle, and females will advertise fertility with urinary scent marks and increased vocalization. Both sexes will range more widely than usual during courtship. Mating pairs separate after the act, and females provide all parenting. The gestation period lasts 93–105 days; females give birth to up to four cubs, and most commonly to two. The mother will not tolerate the presence of males after the birth of cubs, given a risk of infanticide; this behaviour is also found in the tiger. The young are born blind, gaining sight after two weeks. Cubs are weaned at three months, but remain in the birth den for six months before leaving to accompany their mother on hunts. They will continue in their mother's company for one to two years before leaving to establish a territory for themselves. Young males are at first nomadic, jostling with their older counterparts until they succeed in claiming a territory. Typical lifespan in the wild is estimated at around 12–15 years; in captivity, the jaguar lives up to 23 years, placing it among the longest-lived cats. Social activity Like most cats, the jaguar is solitary outside mother-cub groups. Adults generally meet only to court and mate (though limited noncourting socialization has been observed anecdotally) and carve out large territories for themselves. Female territories, which range from 25 to 40 km2 in size, may overlap, but the animals generally avoid one another. Male ranges cover roughly twice as much area, varying in size with the availability of game and space, and do not overlap. The jaguar uses scrape marks, urine, and faeces to mark its territory. Like the other big cats, the jaguar is capable of roaring and does so to warn territorial and mating competitors away; intensive bouts of counter-calling between individuals have been observed in the wild. Their roar often resembles a repetitive cough, and they may also vocalize mews and grunts. Mating fights between males occur, but are rare, and aggression avoidance behaviour has been observed in the wild. When it occurs, conflict is typically over territory: a male's range may encompass that of two or three females, and he will not tolerate intrusions by other adult males. The jaguar is often described as nocturnal, but is more specifically crepuscular (peak activity around dawn and dusk). Both sexes hunt, but males travel farther each day than females, befitting their larger territories. The jaguar may hunt during the day if game is available and is a relatively energetic feline, spending as much as 50–60% of its time active. 
The jaguar's elusive nature and the inaccessibility of much of its preferred habitat make it a difficult animal to sight, let alone study. Hunting and diet Like all cats, the jaguar is an obligate carnivore, feeding only on meat. It is an opportunistic hunter and its diet encompasses at least 87 species. The jaguar can take virtually any terrestrial or riparian vertebrate found in Central or South America, with a preference for large prey. It regularly takes adult caimans, deer, capybaras, tapirs, peccaries, dogs, foxes, and sometimes even anacondas. However, the cat will eat any small species that can be caught, including frogs, mice, birds (mainly ground-based species such as cracids), fish, sloths, monkeys, and turtles; a study conducted in Cockscomb Basin Wildlife Sanctuary in Belize, for example, revealed the diets of jaguars there consisted primarily of armadillos and pacas. Some jaguars will also take domestic livestock, including adult cattle and horses. While the jaguar often employs the deep throat-bite and suffocation technique typical among Panthera, it sometimes uses a killing method unique amongst cats: it pierces directly through the temporal bones of the skull between the ears of prey (especially the capybara) with its canine teeth, piercing the brain. This may be an adaptation to "cracking open" turtle shells; following the late Pleistocene extinctions, armoured reptiles such as turtles would have formed an abundant prey base for the jaguar. The skull bite is employed with mammals in particular; with reptiles such as the caiman, the jaguar may leap onto the back of the prey and sever the cervical vertebrae, immobilizing the target. Although capable of cracking turtle shells, the jaguar may simply smash into the shell with its paw and scoop out the flesh. When attacking sea turtles as they try to nest on beaches, the jaguar will bite at the head, often beheading the prey, before dragging it off to eat. Reportedly, while hunting horses, a jaguar may leap onto their back, place one paw on the muzzle and another on the nape and then twist, dislocating the neck. Local people have anecdotally reported that when hunting a pair of horses bound together, the jaguar will kill one horse and drag it away, pulling the other, still-living horse along in its wake. With smaller prey such as dogs, a paw swipe to the skull may be sufficient to kill it. The jaguar is a stalk-and-ambush rather than a chase predator. The cat will walk slowly down forest paths, listening for and stalking prey before rushing or ambushing. The jaguar attacks from cover and usually from a target's blind spot with a quick pounce; the species' ambushing abilities are considered nearly peerless in the animal kingdom by both indigenous people and field researchers, and are probably a product of its role as an apex predator in several different environments. The ambush may include leaping into water after prey, as a jaguar is quite capable of carrying a large kill while swimming; its strength is such that carcasses as large as a heifer can be hauled up a tree to avoid flood levels. On killing prey, the jaguar will drag the carcass to a thicket or other secluded spot. It begins eating at the neck and chest, rather than the midsection. The heart and lungs are consumed, followed by the shoulders. The daily food requirement of a 34 kg (75 lb) animal, at the extreme low end of the species' weight range, has been estimated at 1.4 kg (3.1 lb).
For captive animals in the 50–60 kg (110–130 lb) range, more than 2 kg (4.4 lb) of meat daily are recommended. In the wild, consumption is naturally more erratic; wild cats expend considerable energy in the capture and kill of prey, and they may consume up to 25 kg (55 lb) of meat at one feeding, followed by periods of famine. Unlike all other species in the Panthera genus, jaguars very rarely attack humans. Most of the scant cases where jaguars turn to taking a human show the animal is either old with damaged teeth or is wounded. Sometimes, if scared or threatened, jaguars in captivity may lash out at zookeepers. Distribution and habitat It has been an American cat since crossing the Bering Land Bridge during the Pleistocene epoch; the immediate ancestor of modern animals is Panthera onca augusta, which was larger than the contemporary cat. Its present range extends from Mexico, through Central America and into South America, including much of Amazonian Brazil. The countries included in this range are Argentina, Belize, Bolivia, Brazil, Colombia, Costa Rica (particularly on the Osa Peninsula), Ecuador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, the United States and Venezuela. The jaguar is now extinct in El Salvador and Uruguay. It occurs in the 400 km² Cockscomb Basin Wildlife Sanctuary in Belize, the 5,300 km² Sian Ka'an Biosphere Reserve in Mexico, the approximately 15,000 km2 Manú National Park in Peru, the approximately 26,000 km2 Xingu National Park in Brazil, and numerous other reserves throughout its range. The inclusion of the United States in the list is based on occasional sightings in the southwest, particularly in Arizona, New Mexico and Texas. In the early 20th century, the jaguar's range extended as far north as the Grand Canyon, and as far west as Southern California. The jaguar is a protected species in the United States under the Endangered Species Act, which has stopped the shooting of the animal for its pelt. In 1996 and from 2004 on, wildlife officials in Arizona photographed and documented jaguars in the southern part of the state. Between 2004 and 2007, two or three jaguars have been reported by researchers around Buenos Aires National Wildlife Refuge in southern Arizona. One of them, called 'Macho B', had been previously photographed in 1996 in the area. For any permanent population in the USA to thrive, protection from killing, an adequate prey base, and connectivity with Mexican populations are essential. On 25 February 2009, a 53.5 kg (118 lb)-Jaguar was caught, radio-collared and released in an area southwest of Tucson, Arizona; this is farther north than had previously been expected and represents a sign there may be a permanent breeding population of jaguars within southern Arizona. The animal was later confirmed to be indeed the same male individual ('Macho B') that was photographed in 2004. On 2 March 2009, Macho B was recaptured and euthanized after he was found to be suffering from kidney failure; the animal was thought to be 16 years old, older than any known wild jaguar. Completion of the United States–Mexico barrier as currently proposed will reduce the viability of any population currently residing in the United States, by reducing gene flow with Mexican populations, and prevent any further northward expansion for the species. 
The historic range of the species included much of the southern half of the United States, and in the south it extended much farther, covering most of the South American continent. In total, its northern range has receded 1,000 km (621 mi) southward and its southern range 2,000 km (1,243 mi) northward. Ice age fossils of the jaguar, dated between 40,000 and 11,500 years ago, have been discovered in the United States, including some at an important site as far north as Missouri. Fossil evidence shows jaguars of up to 190 kg (420 lb), much larger than the contemporary average for the animal. The habitat of the cat includes the rain forests of South and Central America, open, seasonally flooded wetlands, and dry grassland terrain. Of these habitats, the jaguar much prefers dense forest; the cat has lost range most rapidly in regions of drier habitat, such as the Argentinian pampas, the arid grasslands of Mexico, and the southwestern United States. The cat will range across tropical, subtropical, and dry deciduous forests (including, historically, oak forests in the United States). The jaguar is strongly associated with water, and it often prefers to live by rivers, swamps, and in dense rainforest with thick cover for stalking prey. Jaguars have been found at elevations as high as 3,800 m, but they typically avoid montane forest and are not found in the high plateau of central Mexico or in the Andes. There have also been persistent, though unconfirmed, reports of a colony of nonnative, melanistic leopards or jaguars inhabiting the rainforests around Sydney, Australia. A local report compiled statements from over 450 individuals recounting their sightings of large black cats in the area, and confidential NSW Government documents regarding the matter showed wildlife authorities were so concerned about the big cats and the danger to humans that they commissioned an expert to catch one. The ensuing three-day hunt failed, but ecologist Johannes J. Bauer warned: "Difficult as it seems to accept, the most likely explanation is the presence of a large, feline predator. In this area, [it is] most likely a leopard, less likely a jaguar." Ecological role The adult jaguar is an apex predator, meaning it exists at the top of its food chain and is not preyed on in the wild. The jaguar has also been termed a keystone species, as it is assumed that, by controlling the population levels of prey such as herbivorous and granivorous mammals, apex felids maintain the structural integrity of forest systems. However, accurately determining what effect species like the jaguar have on ecosystems is difficult, because data must be compared from regions where the species is absent as well as from its current habitats, while controlling for the effects of human activity. It is accepted that mid-sized prey species undergo population increases in the absence of keystone predators, and this has been hypothesized to have cascading negative effects. However, field work has shown this may be natural variability, and the population increases may not be sustained. Thus, the keystone predator hypothesis is not accepted by all scientists. The jaguar also has an effect on other predators. The jaguar and the cougar, the next-largest feline of the Americas, are often sympatric (related species sharing overlapping territory) and have often been studied in conjunction. Where sympatric with the jaguar, the cougar is smaller than normal, as well as being smaller than the local jaguars.
The jaguar tends to take larger prey, usually over 22 kg (49 lb) and the cougar smaller, usually between 2 and 22 kg (4.4 and 49 lb), reducing the latter's size. This situation may be advantageous to the cougar. Its broader prey niche, including its ability to take smaller prey, may give it an advantage over the jaguar in human-altered landscapes; while both are classified as near-threatened species, the cougar has a significantly larger current distribution. Conservation status Jaguar populations are rapidly declining. The animal is considered Near Threatened by the International Union for Conservation of Nature and Natural Resources, meaning it may be threatened with extinction in the near future. The loss of parts of its range, including its virtual elimination from its historic northern areas and the increasing fragmentation of the remaining range, have contributed to this status. The 1960s had particularly significant declines, with more than 15,000 jaguar skins brought out of the Brazilian Amazon yearly; the Convention on International Trade in Endangered Species of 1973 brought about a sharp decline in the pelt trade. Detailed work performed under the auspices of the Wildlife Conservation Society revealed the animal has lost 37% of its historic range, with its status unknown in an additional 18%. More encouragingly, the probability of long-term survival was considered high in 70% of its remaining range, particularly in the Amazon basin and the adjoining Gran Chaco and Pantanal. The major risks to the jaguar include deforestation across its habitat, increasing competition for food with human beings, poaching, hurricanes in northern parts of its range, and the behaviour of ranchers who will often kill the cat where it preys on livestock. When adapted to the prey, the jaguar has been shown to take cattle as a large portion of its diet; while land clearance for grazing is a problem for the species, the jaguar population may have increased when cattle were first introduced to South America, as the animals took advantage of the new prey base. This willingness to take livestock has induced ranch owners to hire full-time jaguar hunters, and the cat is often shot on sight. The jaguar is regulated as an Appendix I species under CITES: all international trade in jaguars or their parts is prohibited. All hunting of jaguars is prohibited in Argentina, Belize, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Suriname, the United States (where it is listed as endangered under the Endangered Species Act), Uruguay and Venezuela. Hunting of jaguars is restricted to "problem animals" in Brazil, Costa Rica, Guatemala, Mexico and Peru, while trophy hunting is still permitted in Bolivia. The species has no legal protection in Ecuador or Guyana. Current conservation efforts often focus on educating ranch owners and promoting ecotourism. The jaguar is generally defined as an umbrella species – its home range and habitat requirements are sufficiently broad that, if protected, numerous other species of smaller range will also be protected. Umbrella species serve as "mobile links" at the landscape scale, in the jaguar's case through predation. Conservation organizations may thus focus on providing viable, connected habitat for the jaguar, with the knowledge other species will also benefit. Given the inaccessibility of much of the species' range, particularly the central Amazon, estimating jaguar numbers is difficult. 
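As a purely illustrative aside, the capture–recapture arithmetic that underlies the density estimates quoted below can be sketched with the classical Lincoln–Petersen estimator. The sketch is a toy example with assumed numbers: every count and the survey area are invented for illustration, not taken from any study cited in this article.

```python
# Hypothetical illustration of a Lincoln-Petersen (Chapman-corrected) capture-recapture
# estimate, the simplest form of the sampling idea behind large-cat density surveys.
# All counts and the survey area below are made up for the example.

def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen population estimate."""
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

# Example: 12 individuals identified in a first survey session,
# 10 in a second session, 4 of which were already known.
estimated_population = lincoln_petersen(12, 10, 4)
survey_area_km2 = 150.0                                  # hypothetical survey area
density_per_100km2 = 100.0 * estimated_population / survey_area_km2

print(f"Estimated population: {estimated_population:.1f}")
print(f"Density: {density_per_100km2:.1f} jaguars per 100 km^2")
```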
Researchers typically focus on particular bioregions, thus species-wide analysis is scant. In 1991, 600–1,000 (the highest total) were estimated to be living in Belize. A year earlier, 125–180 jaguars were estimated to be living in Mexico's 4,000-km2 (2400-mi2) Calakmul Biosphere Reserve, with another 350 in the state of Chiapas. The adjoining Maya Biosphere Reserve in Guatemala, with an area measuring 15,000 km2 (9,000 mi2), may have 465–550 animals. Work employing GPS telemetry in 2003 and 2004 found densities of only six to seven jaguars per 100 km2 in the critical Pantanal region, compared with 10 to 11 using traditional methods; this suggests the widely used sampling methods may inflate the actual numbers of cats. In the past, conservation of jaguars sometimes occurred through the protection of jaguar "hotspots". These hotspots, described as jaguar conservation units, were large areas populated by about 50 jaguars. However, some researchers recently determined, to maintain a robust sharing of the jaguar gene pool necessary for maintaining the species, it is important that the jaguars are interconnected. To facilitate this, a new project, the Paseo del Jaguar, has been established to connect several jaguar hotspots. Jaguar in the United States The only extant cat native to North America that roars, the jaguar was recorded as an animal of the Americas by Thomas Jefferson in 1799. There are multiple zoological reports of jaguar in California, two as far north as Monterey in 1814 (Langsdorff) and 1826 (Beechey). The coastal Diegueño (Kumeyaay people) of San Diego and Cahuilla Indians of Palm Springs had words for jaguar and the cats persisted there until about 1860. The only recorded description of an active jaguar den with breeding adults and kittens in the U.S. was in the Tehachapi Mountains of California prior to 1860. In 1843, Rufus Sage, an explorer and experienced observer recorded jaguar present on the headwaters of the North Platte River 30–50 miles north of Long's Peak in Colorado. Cabot's 1544 map has a drawing of jaguar ranging over the Pennsylvania and Ohio valleys. Historically, the jaguar was recorded in far eastern Texas, and the northern parts of Arizona and New Mexico. However, since the 1940s, the jaguar has been limited to the southern parts of these states. Although less reliable than zoological records, native American artefacts with possible jaguar motifs range from the Pacific Northwest to Pennsylvania and Florida. Jaguars were rapidly eliminated by Anglo-Americans in the United States. The last female jaguar in the United States was shot by a hunter in Arizona's White Mountains in 1963. In 1969, Arizona outlawed most jaguar hunting, but with no females known to be at large, there was little hope the population could rebound. During the next 25 years, only two jaguars were documented in the United States, both killed: a large male shot in 1971 near the Santa Cruz River by two teenage duck hunters, and another male cornered by hounds in the Dos Cabezas Mountains in 1986. Then in 1996, Warner Glenn, a rancher and hunting guide from Douglas, Arizona, came across a jaguar in the Peloncillo Mountains and became a jaguar researcher, placing webcams which recorded four more Arizona jaguars. On November 19, 2011, a 200-pound male jaguar was photographed near Cochise in southern Arizona by a hunter after being treed by his dogs (the animal left the scene unharmed). 
This is the most recent jaguar sighting since another male, named Macho B, died shortly after being radio-collared by Arizona Game and Fish Department (AGFD) officials in March 2009. In the Macho B incident, a former AGFD subcontractor pleaded guilty to violating the Endangered Species Act for trapping the cat, and a Game and Fish employee was fired for lying to federal investigators. None of the other four male jaguars sighted in Arizona in the last 15 years has been seen since 2006. However, a second 2011 sighting of an Arizona jaguar was reported by a Homeland Security border pilot in June 2011, and conservation researchers sighted two jaguars within 30 miles of the Mexico/U.S. border in 2010. In September 2012, a jaguar was photographed in the Santa Rita Mountains of Arizona, the second such sighting in this region in two years. Legal action by the Center for Biological Diversity led to federal listing of the cat on the endangered species list in 1997. However, on January 7, 2008, George W. Bush appointee H. Dale Hall, Director of the United States Fish and Wildlife Service (USFWS), signed a recommendation to abandon jaguar recovery as a federal goal under the Endangered Species Act. Critics, including the Center for Biological Diversity and the New Mexico Department of Game and Fish, were concerned the jaguar was being sacrificed for the government's new border fence, which is to be built along many of the cat's typical crossings between the United States and Mexico. In 2010, the Obama Administration reversed the Bush Administration policy and pledged to protect "critical habitat" and draft a recovery plan for the species. The USFWS was ultimately ordered by the court to develop a jaguar recovery plan and designate critical habitat for the cats. On August 20, 2012, the USFWS proposed setting aside 838,232 acres in Arizona and New Mexico—an area larger than Rhode Island—as critical jaguar habitat. In mythology and culture Pre-Columbian Americas In pre-Columbian Central and South America, the jaguar has long been a symbol of power and strength. Among the Andean cultures, a jaguar cult disseminated by the early Chavín culture became accepted over most of what is today Peru by 900 BC. The later Moche culture of northern Peru used the jaguar as a symbol of power in many of their ceramics. In Mesoamerica, the Olmec—an early and influential culture of the Gulf Coast region roughly contemporaneous with the Chavín—developed a distinct "were-jaguar" motif of sculptures and figurines showing stylised jaguars or humans with jaguar characteristics. In the later Maya civilization, the jaguar was believed to facilitate communication between the living and the dead and to protect the royal household. The Maya saw these powerful felines as their companions in the spiritual world, and a number of Maya rulers bore names that incorporated the Mayan word for jaguar (b'alam in many of the Mayan languages). The Aztec civilization shared this image of the jaguar as the representative of the ruler and as a warrior. The Aztecs formed an elite warrior class known as the Jaguar Knights. In Aztec mythology, the jaguar was considered to be the totem animal of the powerful deity Tezcatlipoca. Contemporary culture The jaguar and its name are widely used as a symbol in contemporary culture. It is the national animal of Guyana, and is featured in its coat of arms. The flag of Amazonas, a Colombian department, features a black jaguar silhouette pouncing towards a hunter.
The jaguar also appears in banknotes of Brazilian real. The jaguar is also a common fixture in the mythology of many contemporary native cultures in South America, usually being portrayed as the creature which gave humans the power over fire. Jaguar is widely used as a product name, most prominently for a British luxury car brand. The name has been adopted by sports franchises, including the NFL's Jacksonville Jaguars and the Mexican football club Jaguares de Chiapas. Grammy-winning Mexican rock band "Jaguares" were also influenced by the magnificent animal to choose their band name. The crest of Argentina's national federation in rugby union features a jaguar; however, because of a historic accident, the country's national team is nicknamed Los Pumas. The country's "A" (second-level) national team in that sport now bears the Jaguars name. In the spirit of the ancient Mayan culture, the 1968 Olympics in Mexico City adopted a red jaguar as the first official Olympic mascot. See also - Wozencraft, W. C. (2005). "Order Carnivora". In Wilson, D. E.; Reeder, D. M. Mammal Species of the World (3rd ed.). Johns Hopkins University Press. pp. 546–547. ISBN 978-0-8018-8221-0. OCLC 62265494. - Caso, A., Lopez-Gonzalez, C., Payan, E., Eizirik, E., de Oliveira, T., Leite-Pitman, R., Kelly, M. & Valderrama, C. (2008). "Panthera onca". IUCN Red List of Threatened Species. Version 2011.1. International Union for Conservation of Nature. Retrieved 7 July 2011. Database entry includes justification for why this species is near threatened. - Wroe, Stephen; McHenry, Colin and Thomason, Jeffrey (2006). "Bite club: comparative bite force in big biting mammals and the prediction of predatory behavior in fossil taxa" (PDF). Proceedings of the Royal Society B 272 (1563): 619–25. doi:10.1098/rspb.2004.2986. PMC 1564077. PMID 15817436. Archived from the original on 2006-09-21. Retrieved 2006-08-07. - Hamdig, Paul. "Sympatric Jaguar and Puma". Ecology Online Sweden via archive.org. Archived from the original on 2008-02-01. Retrieved 2009-03-19. - de la Rosa, Carlos Leonardo and Nocke, Claudia C. (2000). A guide to the carnivores of Central America: natural history, ecology, and conservation. The University of Texas Press. p. 25. ISBN 978-0-292-71604-9. - "Jaguar". Online Etymology Dictionary. Douglas Harper. Retrieved 2006-08-06. - "Breve Vocabulario" (in Spanish). Faculty of Law, University of Buenos Aires. Retrieved 2006-09-29. - Díaz, Eduardo Acevedo (1890). "Notas". Nativas (in Spanish). Retrieved 2006-09-29. - "Yaguareté – La Verdadera Fiera". RED Yaguareté (in Spanish). Retrieved 2006-09-27. - "panther", Oxford English Dictionary, 2nd edition - "Panther". Online Etymology Dictionary. Douglas Harper. Retrieved 2006-10-26. - "ounce" 2, Oxford English Dictionary, 2nd edition - Johnson, W. E., Eizirik, E., Pecon-Slattery, J., Murphy, W. J., Antunes, A., Teeling, E. and O'Brien, S. J. (2006). "The Late Miocene radiation of modern Felidae: A genetic assessment". Science 311 (5757): 73–7. doi:10.1126/science.1122277. PMID 16400146. - Turner, A. (1987). "New fossil carnivore remains from the Sterkfontein hominid site (Mammalia: Carnivora)". Annals of the Transvaal Museum 34: 319–347. ISSN 0041-1752. - Yu, L. Zhang, Y. P. (2005). "Phylogenetic studies of pantherine cats (Felidae) based on multiple genes, with novel application of nuclear beta-fibrinogen intron 7 to carnivores". Molecular Phylogenetics and Evolution 35 (2): 483–95. doi:10.1016/j.ympev.2005.01.017. PMID 15804417. - Johnson, W. E. and Obrien, S. J. (1997). 
"Phylogenetic reconstruction of the Felidae using 16S rRNA and NADH-5 mitochondrial genes". Journal of Molecular Evolution 44: S98–116. doi:10.1007/PL00000060. PMID 9071018. - Janczewski, Dianne N.; Modi, William S.; Stephens, J. Claiborne and O'Brien, Stephen J. (1996). "Molecular Evolution of Mitochondrial 12S RNA and Cytochrome b Sequences in the Pantherine Lineage of Felidae". Molecular Biology and Evolution 12 (4): 690–707. PMID 7544865. Retrieved 2006-08-06. - Eizirik E.; Kim, J. H., Menotti-Raymond M., Crawshaw P. G., Jr; O'Brien, S. J., Johnson, W. E. (2001). "Phylogeography, population history and conservation genetics of jaguars (Panthera onca, Mammalia, Felidae)". Molecular Ecology 10 (1): 65–79. doi:10.1046/j.1365-294X.2001.01144.x. PMID 11251788. - "Spirits of the Jaguar". PBS online – Nature. Retrieved 2011-11-11. - Seymour, K.L. (1989). "Panthera onca" (PDF). Mammalian Species 340 (340): 1–9. doi:10.2307/3504096. JSTOR 3504096. Retrieved 2009-12-27. - Nowak, Ronald M. (1999). Walker's Mammals of the World (6th ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-5789-9. - Larson, Shawn E. (1997). "Taxonomic re-evaluation of the jaguar". Zoo Biology 16 (2): 107. doi:10.1002/(SICI)1098-2361(1997)16:2<107::AID-ZOO2>3.0.CO;2-E. - Ruiz-Garcia, M.; Payan, E; Murillo, A. and Alvarez, D. (2006). "DNA microsatellite characterization of the jaguar (Panthera onca) in Colombia" (PDF). Genes & Genetic Systems 81 (2): 115–127. doi:10.1266/ggs.81.115. Retrieved 2011-11-11. - Baker, Taxonomy, pp. 5–7. - "Brazil nature tours, Pantanal nature tours, Brazil tours, Pantanal birding tours, Amazon tours, Iguazu Falls tours, all Brazil tours". Focustours.com. Retrieved 2007-02-28. - Burnien, David and Wilson, Don E. (2001). Animal: The Definitive Visual Guide to the World's Wildlife. New York City: Dorling Kindersley. ISBN 0-7894-7764-5. - Boitani, Luigi (1984). Simon and Schuster's Guide to Mammals. Simon & Schuster. ISBN 0-671-43727-5. - Nowak, Ronald M (1999). Walker's Mammals of the World 2. JHU Press. p. 831. ISBN 0-8018-5789-9. - "All about Jaguars: ECOLOGY". Wildlife Conservation Society. Retrieved 2006-08-11. - Rodrigo Nuanaez, Brian Miller, and Fred Lindzey (2000). "Food habits of jaguars and pumas in Jalisco, Mexico". Journal of Zoology 252 (3): 373. Retrieved 2006-08-08. - "Jaguar Fact Sheet". Jaguar Species Survival Plan. American Zoo and Aquarium Association. Retrieved 2006-08-14. - Nowell, K. and Jackson, P., ed. (1996). "Panthera Onca". Wild Cats. Status Survey and Conservation Action Plan. Gland, Switzerland: IUCN/SSC Cat Specialist Group. IUCN. pp. 118–122. Retrieved 2011-11-11. - "Search for the Jaguar". National Geographic Specials. Kentucky Educational Television. 2003. Retrieved 2012-03-19. - McGrath, Susan (August 2004). Top Cat. National Audubon Society. Retrieved 2009-12-02. - "Jaguar (panthera onca)". Our animals. Akron Zoo. Retrieved 2006-08-11. - Dinets, Vladmir. "First documentation of melanism in the jaguar (Panthera onca) from northern Mexico". Retrieved 2006-09-29. - Meyer, John R. (1994). "Black jaguars in Belize?: A survey of melanism in the jaguar, Panthera onca". Belize Explorer Group. biological-diversity.info. - Baker, Reproduction, pp. 28–38. - Morato, R. G.; Vaz Guimaraes, M; A; B.; Ferriera, F.; Nascimento Verreschi, I. T. and Renato Campanarut Barnabe (1999). "Reproductive characteristics of captive male jaguars". Brazilian Journal of Veterinary Research and Animal Science 36 (5). Retrieved 2011-11-11. - Baker, Natural History and Behavior, pp. 
8–16. - "Jaguars: Magnificence in the Southwest" (PDF) (Spring 2006). Newsletter (Southwest Wildlife Rehabilitation & Educational Foundation). Retrieved 2009-12-06. - Schaller, George B. and Crawshaw, Peter Gransden, Jr. (1980). "Movement Patterns of Jaguar". Biotropica 12 (3): 161–168. doi:10.2307/2387967. JSTOR 2387967. - Rabinowitz, A. R., Nottingham, B. G., Jr (1986). "Ecology and behaviour of the Jaguar (Panthera onca) in Belize, Central America". Journal of Zoology 210 (1): 149. doi:10.1111/j.1469-7998.1986.tb03627.x. Overlapping male ranges are observed in this study in Belize. Note the overall size of ranges is about half of normal. - Weissengruber, G. E.; Forstenpointner, G.; Peters, G.; Kübber-Heiss, A.; Fitch, W. T. (2002). "Hyoid apparatus and pharynx in the lion (Panthera leo), jaguar (Panthera onca), tiger (Panthera tigris), cheetah (Acinonyx jubatus) and domestic cat (Felis silvestris f. catus)". Journal of Anatomy 201 (3): 195–209. doi:10.1046/j.1469-7580.2002.00088.x. PMC 1570911. PMID 12363272. - Hast, M. H. (1989). "The larynx of roaring and non-roaring cats". Journal of Anatomy 163: 117–121. PMC 1256521. PMID 2606766. - Emmons, Louise H. (1987). "Comparative feeding ecology of felids in a neotropical rainforest". Behavioral Ecology and Sociobiology 20 (4): 271. doi:10.1007/BF00292180. - Otfinoski, Steven (2010). Jaguars. Marshall Cavendish. p. 18. ISBN 978-0-7614-4839-6. Retrieved 2011-03-16. - "Jaguar". Kids' Planet. Defenders of Wildlife. Retrieved 2006-09-23. - Schaller, G. B. and Vasconselos, J. M. C. (1978). "Jaguar predation on capybara". Z. Saugetierk 43: 296–301. Retrieved 2011-11-11. - Travellers' Wildlife Guide to Costa Rica by Les Beletsky. Interlink Publishing Group (2004), ISBN 1566565294. - The animal kingdom: based upon the writings of the eminent naturalists Audubon, Wallace, Brehm, Wood, and Others, edited by Hugh Craig. Trinity College (1897), New York. - "Determination That Designation of Critical Habitat Is Not Prudent for the Jaguar". Federal Register Environmental Documents. 2006-07-12. Retrieved 2006-08-30. - Baker, Hand-rearing, pp. 62–75 (table 5). - Baker, Nutrition, pp. 55–61. - "Jaguar". Catsurvivaltrust.org. 2002-03-09. Retrieved 2009-03-08. - "Jaguar: The Western Hemisphere's Top Cat". Planeta. February 2008. Retrieved 2009-03-08. - Sanderson, E. W.; Redford, K. H.; Chetkiewicz, C-L. B.; Medellín, R. A.; Rabinowitz, A. R.: Robinson, J. G. and Taber, A. B. (2002). "Planning to Save a Species: the Jaguar as a Model" (PDF). Conservation Biology 16 (1): 58. doi:10.1046/j.1523-1739.2002.00352.x. Retrieved 2011-11-11. Detailed analysis of present range and terrain types provided here. - Mccain, Emil B. and Childs, Jack L. (2008). "Evidence of resident Jaguars (Panthera onca) in the Southwestern United States and the Implications for Conservation". Journal of Mammalogy 89 (1): 1–10. doi:10.1644/07-MAMM-F-268.1. Retrieved 2011-11-11. - "Jaguar Management". Arizona Game and Fish Department. 2009. Retrieved 2006-08-08. - "Arizona Game and Fish collars first wild jaguar in United States". Readitnews.com. Retrieved 2009-03-08. - Hock, Heather (2009-03-02). "Illness forced vets to euthuanize recaptured jaguar". Azcentral.com. Retrieved 2009-03-08. - "Addressing the Impacts of Border Security Activities On Wildlife and Habitat in Southern Arizona: STAKEHOLDER RECOMMENDATIONS" (PDF). Wildlands Project. Archived from the original on 11 July 2007.
Retrieved 2008-11-03. - "Jaguars". The Midwestern United States 16,000 years ago. Illinois State Museum. Retrieved 2006-08-20. - "On the hunt for the big cat that refuses to die". Sydney Morning Herald. 2010-06-20. Retrieved 2011-11-11. - "Jaguar (Panthera Onca)". Phoenix Zoo. Retrieved 2006-08-30. - "Structure and Character: Keystone Species". mongabay.com. Rhett Butler. Retrieved 2006-08-30. - Wright, S. J.; Gompper, M. E.; DeLeon, B. (1994). "Are large predators keystone species in Neotropical forests? The evidence from Barro Colorado Island". Oikos 71 (2): 279–294. doi:10.2307/3546277. JSTOR 3546277. Archived from the original on 12 October 2007. Retrieved 2011-11-11. - Iriarte, J. A.; Franklin, W. L.; Johnson, W. E. and Redford, K. H. (1990). "Biogeographic variation of food habits and body size of the America puma". Oecologia 85 (2): 185. doi:10.1007/BF00319400. - Brakefield, T. (1993). Big Cats: Kingdom of Might. ISBN 0-89658-329-5. - Weber, William; Rabinowitz, Alan (August 1996). "A Global Perspective on Large Carnivore Conservation" (PDF). Conservation Biology 10 (4): 1046–1054. doi:10.1046/j.1523-1739.1996.10041046.x. Retrieved 2009-12-17. - "Jaguar Refuge in the Llanos Ecoregion". World Wildlife Fund. Retrieved 2006-09-01. - "Glossary". Sonoran Desert Conservation Plan: Kids. Pima County Government. Retrieved 2006-09-01. - Baker, Protection and Population Status, p. 4. - Soisalo, M. K. and Cavalcanti, S. M. C. (2006). "Estimating the density of a jaguar population in the Brazilian Pantanal using camera-traps and capture–recapture sampling in combination with GPS radio-telemetry". Biological Conservation 129 (4): 487. doi:10.1016/j.biocon.2005.11.023. Retrieved 2006-08-08. - "Path of the jaguars project". Ngm.nationalgeographic.com. March 2009. Retrieved 2010-04-02. - Christie, Bob (2011-12-01). "2 Rare Jaguar Sightings in Southern Arizona Excite Conservationists, State Wildlife Officials". Associated Press. Retrieved 2011-12-04. - Full text of "The writings of Thomas Jefferson" - Merriam, C. Hart (1919). "Is the Jaguar Entitled to a Place in the California Fauna?". Journal of Mammalogy 1: 38–40. - Pavlik, Steve (2003). "Rohonas and Spotted Lions: The Historical and Cultural Occurrence of the Jaguar, Panthera onca, among the Native Tribes of the American Southwest". Wicazo Sa Review 18 (1): 157–175. doi:10.1353/wic.2003.0006. JSTOR 1409436. - Daggett, Pierre M. and Henning, Dale R. (1974). "The Jaguar in North America". American Antiquity 39 (3): 465–469. doi:10.2307/279437. JSTOR 279437. - Will Rizzo (December 2005). "Return of the Jaguar?". Smithsonian Magazine. Retrieved 2011-11-23. - Davis, Tony and Steller, Tim (2011-11-22). "Jaguar seen in area of Cochise". Arizona Daily Star. Retrieved 2011-11-23. - Davis, Tony (November 25, 2012). "Jaguar photo taken near Rosemont". azstarnet.com. Arizona Daily Star. Retrieved December 1, 2012. - Matlock, Staci (2008-01-17). "Jaguar recovery efforts lack support from federal agency". The New Mexican. Retrieved 2011-11-28. - Susan H. Greenberg (2012-08-21). "Kitty Corner: Jaguars Win Critical Habitat in U.S.". Scientific American. Retrieved 2012-08-25. - Museo Arqueologico Rafael Larco Herrera (1997). In Berrin, Katherine. The Spirit of Ancient Peru: Treasures from the Museo Arqueologico Rafael Larco Herrera. New York City: Thames and Hudson. ISBN 0-500-01802-2. - Bulliet, Richard W. et al. (2010). The Earth and Its Peoples: A Global History. Cengage Learning. pp. 75–. ISBN 978-1-4390-8476-2. Retrieved 11 December 2011. 
- Lockard, Craig A. (2010). Societies, Networks, and Transitions, Volume I: To 1500: A Global History. Cengage Learning. pp. 215–. ISBN 978-1-4390-8535-6. Retrieved 11 December 2011. - Christenson, Allen J. (2007). Popol vuh: the sacred book of the Maya. University of Oklahoma Press. pp. 196–. ISBN 978-0-8061-3839-8. Retrieved 11 December 2011. - "Guyana". RBC Radio. Retrieved 2011-11-11. - Gutterman, D. (2008-07-26). "Amazonas Department (Colombia)". Fotw.net. Retrieved 2010-04-02. - Levi-Strauss, Claude (2004) . O Cru e o Cozido. São Paulo: Cosac & Naify. Retrieved 2011-11-11. - Welch, Paula. "Cute Little Creatures: Mascots Lend a Smile to the Games". la84foundation.org. Retrieved 2011-11-11. - Baker, W. K., Jr. et al.. In Law, Christopher. Guidelines for Captive Management of Jaguars. Jaguar Species Survival Plan. American Zoo and Aquarium Association. Retrieved 2011-11-11. Further reading - Brown, David, and Carlos A. López González (2001). Borderland Jaguars. University of Utah Press. ISBN 978-0-87480-696-0.
P l a n e G e o m e t r y An Adventure in Language and Logic
Here are the first principles of plane geometry -- the Definitions, Postulates, and Axioms or Common Notions -- followed by a brief commentary.
Definitions
1. An angle is the inclination to one another of two straight lines that meet.
2. The point at which two lines meet is called the vertex of the angle.
3. If a straight line that stands on another straight line makes the adjacent angles equal, then each of those angles is called a right angle; and the straight line that stands on the other is called a perpendicular to it.
4. An acute angle is less than a right angle. An obtuse angle is greater than a right angle.
5. Angles are complementary (or complements of one another) if, together, they equal a right angle. Angles are supplementary (or supplements of one another) if together they equal two right angles.
6. Rectilinear figures are figures bounded by straight lines. A triangle is bounded by three straight lines, a quadrilateral by four, and a polygon by more than four straight lines.
7. A square is a quadrilateral in which all the sides are equal, and all the angles are right angles.
8. An equilateral triangle has three equal sides. An isosceles triangle has two equal sides. A scalene (or oblique) triangle has three unequal sides.
9. The vertex angle of a triangle is the angle opposite the base.
10. The height of a triangle is the straight line drawn from the vertex perpendicular to the base.
11. A right triangle is a triangle that has a right angle.
12. Figures are congruent when, if one of them were placed on the other, they would exactly coincide. (Congruent figures are thus equal to one another in all respects.)
13. Parallel lines are straight lines that are in the same plane and do not meet, no matter how far extended in either direction.
14. A parallelogram is a quadrilateral whose opposite sides are parallel.
15. A circle is a plane figure bounded by one line, called the circumference, such that all straight lines drawn from a certain point within the figure to the circumference, are equal to one another.
16. And that point is called the center of the circle.
17. A diameter of a circle is a straight line through the center and terminating in both directions on the circumference. A straight line from the center to the circumference is called a radius; plural, radii.
Postulates
Grant the following:
1. To draw a straight line from any point to any point.
2. To extend a straight line for as far as we please in a straight line.
3. To draw a circle whose center is the extremity of any straight line, and whose radius is the straight line itself.
4. All right angles are equal to one another.
5. If a straight line that meets two straight lines makes the interior angles on the same side less than two right angles, then those two straight lines, if extended, will meet on that same side. (That is, if angles 1 and 2 together are less than two right angles, then the straight lines AB, CD, if extended, will meet on that same side; which is to say, AB, CD are not parallel.)
Axioms or Common Notions
1. Things that are equal to the same thing are equal to one another.
2. If equals are added to equals, the wholes (the "sums") will be equal.
3. If equals are subtracted from equals, what remains will be equal.
4. Things that coincide with one another are equal to one another.
5. The whole is greater than the part.
6. Equal magnitudes have equal parts; equal halves, equal thirds, and so on.
Commentary on the Definitions A definition regulates how a word will be used. Therefore it is never a question of whether a definition is true or false. A definition is required only to be understood. In a very significant sense, we do not need definitions. Because rather than call a figure a "triangle," we could just as well call it "a figure bounded by three straight lines." The definition eliminates the wordiness. Definitions find their greatest importance in proofs. Because in order to prove that a triangle is equilateral, for example, we must prove that the figure satisfies the definition of "equilateral." We say that definitions are reversible. This means that a definition is equivalent to an if and only if sentence. For example, if a triangle is equilateral, then all its sides are equal. And conversely, if all the sides of a triangle are equal, then it is called equilateral. Note that the definition of a right angle says nothing about measurement, about 90°. Plane geometry is not the study of how to apply arithmetic to figures. In geometry we are concerned only with what we can see and reason directly, not through computation. A most basic form of knowledge is that two magnitudes are simply equal -- not that they are both 90° or 9 meters. How can we know when things are equal? That is one of the main questions of geometry. The definition of a circle provides our first way of knowing that two straight lines could be equal. Because if we know that a figure is a circle, then we would know that any two radii are equal. (Definitions 15 and 17.) We have chosen not to define a "point," although Euclid does. ("A point is that which has no part, that is, no magnitude or size.") And we have not defined a "line," although again Euclid does. ("A line is length without breadth.") Since there is never occasion to prove that something is a point or a line, a definition of one is not logically required. Nevertheless, with regard to a point, it is important to understand that it is the idea of position only: "Here." To suppose that lines are composed of points is a serious misunderstanding. Commentary on the Postulates The figures of geometry -- the triangles, squares, circles -- exist primarily in our inner, mental space. They are ideas. We give them incarnation by being able to draw them on paper. The fact that we can draw a figure qualifies as what we call its mathematical existence. For we may not simply assume that what we have defined, such as a "triangle" or a "circle," physically exists. The first three Postulates narrowly set down what we are permitted to draw. Everything else we must prove. Each of those Postulates is therefore a "problem" -- a construction -- that can be accomplished. The instruments of construction are straightedge and compass only. Postulate 1 grants that whatever we draw with a straightedge is a straight line. Postulate 3 grants that the figure we draw with a compass is a circle. Note, finally, that the word all, as in "all right angles" or "all straight lines," refers to all that exist, that is, that have actually been drawn. Geometry -- at any rate Euclid's -- is never just in our mind. Commentary on the Axioms or Common Notions The distinction between a postulate and an axiom is that a postulate is about the specific subject at hand, in this case, geometry; while an axiom is more generally true; it is in fact a common notion. Yet each has the same logical function, which is to authorize the proofs that follow.
Implicit in these Axioms is our very understanding of equal versus unequal, which is: Two magnitudes of the same kind are either equal or one of them is greater. So, these Axioms, together with the Definitions and Postulates, are the first principles from which our theory of figures will be deduced. Please "turn" the page and do some Problems. Continue on to Proposition 1.
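The propositions that follow these first principles are constructions carried out with straightedge and compass alone. As a modern numerical aside (an illustration added here, not part of the original lesson), the sketch below models the construction behind Proposition 1: Postulate 3 permits a circle about each endpoint of a segment AB, and an intersection point of the two circles serves as the apex of an equilateral triangle on AB. The coordinates and function names are assumptions of the sketch only; Euclid's own argument uses none.

```python
# A numerical model of the compass construction behind Proposition 1:
# draw a circle about A with radius AB and a circle about B with radius BA;
# an intersection point C is the third vertex of an equilateral triangle on AB.
import math

def equilateral_apex(ax, ay, bx, by):
    """Return one intersection point of circle(A, radius AB) and circle(B, radius BA)."""
    mx, my = (ax + bx) / 2, (ay + by) / 2   # midpoint of AB
    dx, dy = bx - ax, by - ay               # the segment AB as a vector
    h = math.sqrt(3) / 2                    # height of an equilateral triangle, per unit of side
    # Step off from the midpoint, perpendicular to AB, by the triangle's height.
    return mx - dy * h, my + dx * h

A, B = (0.0, 0.0), (1.0, 0.0)
C = equilateral_apex(*A, *B)

side = math.dist(A, B)
print(math.isclose(math.dist(A, C), side))  # True: AC = AB (both are radii of the circle about A)
print(math.isclose(math.dist(B, C), side))  # True: BC = BA (both are radii of the circle about B)
```

Euclid's proof itself needs no numbers: AC and AB are radii of the same circle about A, and BC and BA are radii of the circle about B, so by Axiom 1 all three sides are equal to one another.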
A field is a physical quantity that has a value for each point in space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point. As the day progresses, the directions in which the vectors point change as the directions of the wind change. A field can be classified as a scalar field, a vector field, a spinor field, or a tensor field according to whether the value of the field at each point is a scalar, a vector, a spinor (e.g., a Dirac electron) or, more generally, a tensor, respectively. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. A field may be thought of as extending throughout the whole of space. In practice, the strength of every known field has been found to diminish with distance to the point of being undetectable. For instance, in Newton's theory of gravity, the gravitational field strength is inversely proportional to the square of the distance from the gravitating object. Therefore the Earth's gravitational field quickly becomes undetectable (on cosmic scales). Defining the field as "numbers in space" shouldn't detract from the idea that it has physical reality. “It occupies space. It contains energy. Its presence eliminates a true vacuum.” The field creates a "condition in space" such that when we put a particle in it, the particle "feels" a force. If an electrical charge is accelerated, the effects on another charge do not appear instantaneously. The first charge feels a reaction force, picking up momentum, but the second charge feels nothing until the influence, traveling at the speed of light, reaches it and gives it the momentum. Where is the momentum before the second charge moves? By the law of conservation of momentum it must be somewhere. Physicists have found it of "great utility for the analysis of forces" to think of it as being in the field. This utility leads to physicists believing that electromagnetic fields actually exist, making the field concept a supporting paradigm of the entire edifice of modern physics. That said, John Wheeler and Richard Feynman have entertained Newton's pre-field concept of action at a distance (although they put it on the back burner because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics). "The fact that the electromagnetic field can possess momentum and energy makes it very real... a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have". The first field to appear in physics was the gravitational field. To Isaac Newton his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. In the eighteenth century, a new entity was devised to simplify the bookkeeping of all these gravitational forces. This entity, the gravitational field, gave at each point in space the total gravitational force on an object with unit mass at that point. 
This did not change the physics in any way: it did not matter if you calculated all the gravitational forces on an object individually and then added them together, or if you first added all the contributions together as a gravitational field and then applied it to an object. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1849 Michael Faraday coined the term "field". The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past. Maxwell, at first, did not adopt the modern concept of a field as a fundamental entity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. A direct consequence of this hypothesis was that the observed velocity of the electromagnetic waves should depend on the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the theory of special relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers should be related to each other in such a way that the velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities. In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the most fundamental objects in nature. Classical fields There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point. Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation Any massive body M has a gravitational field g which describes its influence on other massive bodies.
The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m: g(r) = F(r)/m. Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M. The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. Michael Faraday first realized the importance of a field as a physical object during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy. These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern version of these equations is called Maxwell's equations. A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that F = qE. Using this and Coulomb's law, the electric field due to a single particle of charge Q at the origin is E(r) = Q r̂ / (4πε₀ r²). The electric field is conservative, and hence can be described by a scalar potential, V(r): E(r) = −∇V(r). A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is F(r) = q v × B(r), where B(r) is the magnetic field, determined from I by the Biot–Savart law: B(r) = (μ₀ I / 4π) ∫ dℓ × r̂ / r². The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): B(r) = ∇ × A(r). In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allows one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations E = −∇V − ∂A/∂t and B = ∇ × A. At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime. Gravitation in general relativity Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation. Waves as fields Waves can be constructed as physical fields, due to their finite propagation speed and causal nature when a simplified physical model of an isolated closed system is adopted. They are also subject to the inverse-square law. For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice, though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell.
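Before turning to quantum fields, the bookkeeping role of a classical field described above can be made concrete with a short numerical sketch (an illustration added here, not taken from the original text): the Newtonian field of several point masses is obtained by superposing their per-unit-mass contributions, and the force on a test mass is then recovered as F = m g. The gravitational constant is standard, but the masses, positions, and test mass are arbitrary example values.

```python
# Minimal sketch: the Newtonian gravitational field g(r) of several point masses,
# evaluated by superposition, and the force it exerts on a test mass.
# Example values only; units are SI.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_field(sources, r):
    """Superpose the contributions g_i = -G * M_i * (r - r_i) / |r - r_i|**3 of each point mass."""
    gx, gy, gz = 0.0, 0.0, 0.0
    for (sx, sy, sz), M in sources:
        dx, dy, dz = r[0] - sx, r[1] - sy, r[2] - sz
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        gx -= G * M * dx / d3
        gy -= G * M * dy / d3
        gz -= G * M * dz / d3
    return gx, gy, gz

# Two example point masses and a field point.
sources = [((0.0, 0.0, 0.0), 5.0e24), ((1.0e7, 0.0, 0.0), 7.0e22)]
g = grav_field(sources, (2.0e7, 0.0, 0.0))

m_test = 10.0                              # kg
force = tuple(m_test * gi for gi in g)     # F = m g, recovering the total force on the test mass
print(g, force)
```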
Quantum fields It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory. In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges. These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory. As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (more precisely, relativistic wave equations (RWEs)). Thus one can speak of Yang-Mills, Dirac, Klein-Gordon and Schroedinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus over spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization. Some theories, such as the Batalin–Vilkovisky formalism, contains both fields and antifields. Field theory A field theory is a physical theory that describes how one or more physical fields interact with matter. Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as the classical mechanics (or quantum mechanics) of a system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories. It is possible to construct simple fields without any a priori knowledge of physics using only mathematics from several variable calculus, potential theory and partial differential equations. For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. 
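As a concrete illustration of a classical scalar field of the kind just listed, the short Python sketch below evolves a temperature field T(x, t) under the one-dimensional heat/diffusion equation ∂T/∂t = α ∂²T/∂x², using a simple explicit finite-difference step. The grid size, diffusivity, and initial bump are arbitrary choices made for this sketch, not values from the text; the point is only that the continuous field is replaced by a large number of grid values, one degree of freedom per point.

    import numpy as np

    # Discretize the scalar field T(x, t) on a 1-D grid: each grid value is
    # one "degree of freedom" of the field.
    nx, alpha = 101, 1.0e-4            # number of grid points, diffusivity (m^2/s)
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / alpha            # explicit scheme needs dt <= 0.5*dx^2/alpha

    T = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial temperature bump

    for _ in range(500):
        # Second spatial derivative by central differences (interior points only).
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += alpha * dt * lap         # forward-Euler time step
        T[0] = T[-1] = 0.0                  # fixed (Dirichlet) boundary values

    print("peak temperature after diffusion:", T.max())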
Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus over vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. It should be noted that the scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense. Symmetries of fields A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types: Spacetime symmetries Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are — - scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space. - vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves as usual under rotations in space. - tensor fields, (such as the stress tensor of a crystal) specified by a tensor at each point of space. The components of the tensor transform between themselves as usual under rotations in space. - spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin. Internal symmetries Fields may have internal symmetries in addition to spacetime symmetries. For example, in many situations one needs fields which are a list of space-time scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry of the strong interaction, as is the isospin or flavour symmetry. If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries. Statistical field theory Statistical field theory attempts to extend the field-theoretic paradigm toward many body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument. Much like statistical mechanics has some overlap between quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former with which it shares many methods. One important example is mean field theory. Continuous random fields Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, but they are in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. 
Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution. We can think about a continuous random field, in a (very) rough way, as an ordinary function that is infinite almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. The infinities are not well-defined; but the finite values can be associated with the functions used as the weight functions to get the finite values, and that can be well-defined. We can define a continuous random field well enough as a linear map from a space of functions into the real numbers.

Mathematics of fields

The continuum view (hence the term "field") can be approached by letting the system have an infinite number of degrees of freedom. The dimension of a vector ordinary differential equation is simply the dimension of the vector dependent variable, or the vector function. In this sense, partial differential equations can be thought of as (coupled) ODEs of infinite dimension (a mathematical interpretation of the degrees of freedom argument). In addition, vector fields called slope fields are important tools in analyzing results in ODEs (see also phase plane). The exact nature of the object (and its arguments) in the differential equation (e.g. real scalar, complex matrix, Euclidean vector or four-vector, etc.) determines the kind of analysis needed (in our examples: calculus of a real single variable, of a complex matrix, and over real vector fields). Other than partial differential equations, other parts of (classical) real analysis and complex analysis were either inspired by field theory or have techniques applied in it (or both). Examples of such areas are spectral theory and harmonic analysis (vibrations and waves) and the self-descriptive potential theory, all now mathematical subjects in their own right. However, perhaps the most prominent examples are variational calculus (given its connections to the Lagrangian and Hamiltonian formalisms) and multivariable calculus with its generalizations, differential geometry (including tensor calculus and gauge theory) and its close relative differential topology.
http://en.wikipedia.org/wiki/Quantum_field
IONIC AND COVALENT BONDS

A bond is an attachment among atoms. Atoms may be held together for any of several reasons, but all bonds have to do with the electrons, particularly the outside electrons, of atoms. There are bonds that occur due to sharing electrons. There are bonds that occur due to a full electrical charge difference attraction. There are bonds that come about from partial charges or the position or shape of electrons about an atom. But all bonds have to do with electrons. Since chemistry is the study of elements, compounds, and how they change, it might be said that chemistry is the study of electrons. If we study the changes brought about by moving protons or neutrons, we would be studying nuclear physics. In chemical reactions the elements do not change from one element to another, but are only rearranged in their attachments.

A compound is a group of atoms with an exact number and type of atoms in it arranged in a specific way. Every bit of that material is exactly the same. Exactly the same elements in exactly the same proportions are in every bit of the compound. Water is an example of a compound. One oxygen atom and two hydrogen atoms make up water. Each hydrogen atom is attached to an oxygen atom by a bond. Any other arrangement is not water. If any other elements are attached, it is not water. H2O is the formula for that compound. This formula indicates that there are two hydrogen atoms and one oxygen atom in the compound. H2S is hydrogen sulfide. Hydrogen sulfide does not have the same types of atoms as water. It is a different compound. H2O2 is the formula for hydrogen peroxide. It might have the right elements in it to be water, but it does not have them in the right proportion. It is still not water. The word formula is also used to mean the smallest bit of any compound. A molecule is a single formula of a compound joined by covalent bonds. The Law of Constant Proportions states that a given compound always contains the same proportion by weight of the same elements.

Some atoms, such as metals, tend to lose electrons to make the outside ring or rings of electrons more stable, and other atoms tend to gain electrons to complete the outside ring. An ion is a charged particle. Electrons are negative. The negative charge of the electrons can be offset by the positive charge of the protons, but the number of protons does not change in a chemical reaction. When an atom loses electrons it becomes a positive ion because the number of protons exceeds the number of electrons. Non-metal ions and most of the polyatomic ions have a negative charge. The non-metal ions tend to gain electrons to fill out the outer shell. When the number of electrons exceeds the number of protons, the ion is negative. The attraction between a positive ion and a negative ion is an ionic bond. Any positive ion will bond with any negative ion. They are not fussy. An ionic compound is a group of atoms attached by an ionic bond that is a major unifying portion of the compound. A positive ion, whether it is a single atom or a group of atoms all with the same charge, is called a cation, pronounced as if a cat were an ion. A negative ion is called an anion, pronounced as if Ann were an ion.
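Since an ion's charge is just the imbalance between protons and electrons described above, a few lines of Python can make the bookkeeping concrete. The function names and the sodium/chlorine example are illustrative only.

    def ion_charge(protons, electrons):
        """Net charge of an ion: protons carry +1 each, electrons -1 each."""
        return protons - electrons

    def classify(protons, electrons):
        charge = ion_charge(protons, electrons)
        if charge > 0:
            return f"cation, charge {charge:+d}"
        if charge < 0:
            return f"anion, charge {charge:+d}"
        return "neutral atom"

    # Sodium (11 protons) after losing one electron, chlorine (17) after gaining one.
    print("Na:", classify(11, 10))   # cation, charge +1
    print("Cl:", classify(17, 18))   # anion, charge -1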
The name of an ionic compound is the name of the positive ion (cation) first and the negative ion (anion) second. The valence of an atom is the likely charge it will take on as an ion. The names of the ions of metal elements with only one valence, such as the Group 1 or Group 2 elements, are the same as the names of the elements. A sodium atom that has lost the electron in its outside shell is a positive ion, sodium ion, Na+. A magnesium atom that has lost the two electrons in its outside shell is a plus two (double positive) magnesium ion, Mg2+. The names of the ions of nonmetal elements (anions) develop an -ide on the end of the name of the element. Nonmetal atoms tend to GAIN electrons, so when a nonmetal atom collects an extra electron, it will become a negative ion. For instance, fluorine ion is fluoride, F-; oxygen ion is oxide, O2-, a double negative ion because it gains TWO electrons; and iodine ion is iodide, I-. There are a number of elements, usually transition elements, that have more than one valence and have a name for each ion; for instance, ferric ion is an iron ion with a positive three charge. Ferrous ion is an iron ion with a charge of plus two. There are a number of common groups of atoms that have a charge for the whole group. Such a group is called a polyatomic ion or radical. Chemtutor suggests it is best to learn by rote the list of polyatomic ions with their names, formulas and charges, and the elements with more than one valence the same way.

SOME ATOMS WITH MULTIPLE VALENCES. NOTE THERE ARE TWO COMMON NAMES FOR THE IONS. YOU SHOULD KNOW BOTH THE STOCK SYSTEM AND THE OLD SYSTEM NAMES.

- Fe2+ iron II (ferrous) and Fe3+ iron III (ferric)
- Cu+ copper I (cuprous) and Cu2+ copper II (cupric)
- Au+ gold I (aurous) and Au3+ gold III (auric)
- Sn2+ tin II (stannous) and Sn4+ tin IV (stannic)
- Pb2+ lead II (plumbous) and Pb4+ lead IV (plumbic)
- Cr2+ chromium II (chromous) and Cr3+ chromium III (chromic)

The ion names by the Stock system are pronounced "copper one", "copper two", etc. Notice that the two most likely ions of an atom that has multiple valences have suffixes in the old system to identify them. The smaller of the two charges gets the "-ous" suffix, and the larger of the two charges gets the "-ic" suffix. This leads to the amusing possibility of Saint Nickelous coming down your chimney. (Boo! Hiss!)

SOME ATOMS WITH ONLY ONE COMMON VALENCE:
- ALL GROUP 1 ELEMENTS ARE +1
- ALL GROUP 2 ELEMENTS ARE +2
- ALL GROUP 7 (HALOGEN) ELEMENTS ARE -1 WHEN IONIC
- Oxygen and sulfur (GROUP 6) are -2 when ionic
- Hydrogen is usually +1
- Al3+, Zn2+, and Ag+

RADICALS OR POLYATOMIC IONS

The following radicals or polyatomic ions are groups of atoms of more than one kind of element attached by covalent bonds. They do not often come apart in ionic reactions. The charge on the radical is for the whole group of atoms as a unit. These are common radicals you should learn WITH THEIR CHARGE AND NAME.

- (NH4)+ AMMONIUM (Do not confuse with NH3, AMMONIA GAS)
- (NO3)- NITRATE (Do not confuse with NITRIDE (N3-) or NITRITE)
- (NO2)- NITRITE (Do not confuse with NITRIDE (N3-) or NITRATE)
- (C2H3O2)- ACETATE (NOTE - This is not the only way this may be written.)
- (ClO3)- CHLORATE (Do not confuse with CHLORIDE (Cl-) or CHLORITE)
- (ClO2)- CHLORITE (Do not confuse with CHLORIDE (Cl-) or CHLORATE)
- (SO3)2- SULFITE (Do not confuse with SULFIDE (S2-) or SULFATE)
- (SO4)2- SULFATE (Do not confuse with SULFIDE (S2-) or SULFITE)
- (HSO3)- BISULFITE (or HYDROGEN SULFITE)
- (PO4)3- PHOSPHATE (Do not confuse with P3-, PHOSPHIDE)
- (HCO3)- BICARBONATE (or HYDROGEN CARBONATE)
- (CO3)2- CARBONATE
- (HPO4)2- HYDROGEN PHOSPHATE
- (H2PO4)- DIHYDROGEN PHOSPHATE
- (OH)- HYDROXIDE
- (BO3)3- BORATE
- (AsO4)3- ARSENATE
- (C2O4)2- OXALATE
- (CN)- CYANIDE
- (MnO4)- PERMANGANATE

ACIDS OF SOME COMMON POLYATOMIC IONS

These are written here with the parentheses around the polyatomic ions to show their origin. Usually these compounds are written without the parentheses, such as HNO3 or H2SO4. Note that the acids of polyatomic ions with a single negative charge have only one hydrogen. Polyatomic ions with two negative charges have two hydrogens.

- H(OH) WATER (!)
- H(NO3) NITRIC ACID
- H(NO2) NITROUS ACID
- H(C2H3O2) ACETIC ACID
- H2(CO3) CARBONIC ACID
- H2(SO3) SULFUROUS ACID
- H2(SO4) SULFURIC ACID
- H3(PO4) PHOSPHORIC ACID
- H2(CrO4) CHROMIC ACID
- H3(BO3) BORIC ACID
- H2(C2O4) OXALIC ACID

WRITING IONIC COMPOUND FORMULAS

In the lists above, the radicals and compounds have a small number after and below an element if there is more than one of that type of atom. For instance, ammonium ions have one nitrogen atom and four hydrogen atoms in them. Sulfuric acid has two hydrogens, one sulfur, and four oxygens. Knowing the ions is the best way to identify ionic compounds and to predict how materials would join. People who do not know of the ammonium ion and the nitrate ion would have a difficult time seeing that NH4NO3 is ammonium nitrate. Chemtutor very highly recommends that you know all the above ions, complete with the valence or charge. One of the best ways to learn the ions is to write ionic compounds. Print the lists of ions and use them to write your compounds until you become more familiar with them. Chemtutor has the "Compound Worksheet," a page of compounds at the end of this section for your practice. You can print it and fold back the answers on the right side of the page.

Let's consider what happens in an ionic bond using electron configuration, the octet rule, and some creative visualization. A sodium atom has eleven electrons around it. The first shell has two electrons in an s subshell. The second shell is also full, with eight electrons in an s and a p subshell. The outer shell has one lonely electron, as do the other elements in Group 1. This outside electron can be detached from the sodium atom, leaving a sodium ion with a single positive charge and an electron. A chlorine atom has seventeen electrons. Two are in the first shell, eight are in the second shell, and seven are in the outside shell. The outside shell is lacking one electron to make a full shell, as are all the elements of Group 7. When the chlorine atom collects another electron, the atom becomes a negative ion. The positive sodium ion missing an electron is attracted to the negative chloride ion with an extra electron. The symbol for a single unattached electron is a lower case e with a negative sign after and above it, e-.

½Cl2 + Na → Cl + e- + Na+ → Cl- + Na+

Any compound should have a net zero charge. The single positive charge of the sodium ion cancels the single negative charge of the chloride ion.
The same idea works for an ionic compound made of ions of plus and minus two or plus and minus three, such as magnesium sulfate or aluminum phosphate. In magnesium sulfate, TWO electrons are transferred from the magnesium to the sulfate, and, in aluminum phosphate, THREE electrons are transferred from the aluminum to the phosphate.

Mg2+ + (SO4)2- → Mg(SO4) or MgSO4
Al3+ + (PO4)3- → Al(PO4) or AlPO4

But what happens if the amount of charge does not match? Aluminum bromide has a cation that is triple positive and an anion that is single negative. The compound must be written with one aluminum and three bromide ions: AlBr3. Calcium phosphate has a double positive cation and a triple negative anion. If you like to think of it this way, the numbers of the charges must be switched to the other ion: Ca3(PO4)2. Note that there must be two phosphates in each calcium phosphate, so the parentheses must be included in the formula to indicate that. Each calcium phosphate formula (ionic compounds do not make molecules) has three calcium atoms, two phosphorus atoms, and eight oxygen atoms.

There are a small number of ionic compounds that do not fit into the system for one reason or another. A good example of this is magnetite, an ore of iron, Fe3O4. The calculated charge on each iron atom would be +8/3, not a likely actual charge. The deviance from the system in the case of magnetite can be accounted for by a mixture of the common ferric and ferrous ions. (A short code sketch of this charge-balancing rule appears after the table of nitrogen oxides below.)

BINARY COVALENT COMPOUNDS

The word binary means that there are two types of atom in a compound. Covalent compounds are groups of atoms joined by covalent bonds. Binary covalent compounds are some of the very smallest compounds attached by covalent bonds. A covalent bond is the result of the sharing of a pair of electrons between two atoms. The chlorine molecule is a good example of the bond, even if it has only one type of atom. Chlorine gas, Cl2, has two chlorine atoms, each of which has seven electrons in the outside ring. Each atom contributes an electron to an electron pair that makes the covalent bond. Each atom shares the pair of electrons. In the case of chlorine gas, the two elements in the bond have exactly the same pull on the electron pair, so the electrons are exactly evenly shared. The covalent bond can be represented by a pair of dots between the atoms, Cl:Cl, or a line between them, Cl-Cl. Sharing the pair of electrons makes each chlorine atom feel as if it has a completed outer shell of eight electrons. The covalent bond is much harder to break than an ionic bond. The ionic bonds of soluble ionic compounds come apart in water, but covalent bonds do not usually come apart in water. Covalent bonds make real molecules, groups of atoms that are genuinely attached to each other. Binary covalent compounds have two types of atom in them, usually non-metal atoms. Covalent bonds can come in double (sharing of two pairs of electrons) and triple (three pairs of electrons) bonds.

- N2O nitrous oxide (dinitrogen monoxide)
- NO nitric oxide (nitrogen monoxide)
- N2O3 nitrous anhydride (dinitrogen trioxide)
- NO2 nitrogen dioxide (nitrogen dioxide)
- N2O4 nitrogen tetroxide (dinitrogen tetroxide)
- NO3 nitrogen trioxide

With the compounds of nitrogen and oxygen to use as examples, we see that there are often more ways for any two elements to combine with each other by covalent bonds than by ionic bonds. Many of the frequently seen compounds already have names that have been in use for a long time.
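As promised above, here is a minimal Python sketch of the charge-balancing rule used for AlBr3 and Ca3(PO4)2: combine the ions in the smallest whole-number ratio that makes the total charge zero. The function name and ion data are invented for illustration, and the sketch makes no attempt to handle exceptions such as magnetite.

    from math import gcd

    def ionic_formula(cation, cation_charge, anion, anion_charge, polyatomic_anion=False):
        """Combine a cation and an anion so the total charge is zero."""
        lcm = abs(cation_charge * anion_charge) // gcd(abs(cation_charge), abs(anion_charge))
        n_cation = lcm // abs(cation_charge)
        n_anion = lcm // abs(anion_charge)

        def part(symbol, count, poly):
            if count == 1:
                return symbol
            return f"({symbol}){count}" if poly else f"{symbol}{count}"

        return part(cation, n_cation, False) + part(anion, n_anion, polyatomic_anion)

    print(ionic_formula("Al", 3, "Br", -1))                          # AlBr3
    print(ionic_formula("Ca", 2, "PO4", -3, polyatomic_anion=True))  # Ca3(PO4)2
    print(ionic_formula("Mg", 2, "SO4", -2, polyatomic_anion=True))  # MgSO4

Using the least common multiple of the two charges is just the "switch the numbers of the charges" trick in arithmetic form, with the ratio automatically reduced.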
These names, called common names, may or may not have anything to do with the makeup of the material, but more of the common names of covalent compounds are used than of the ionic compounds. The system names include numbers that indicate how many of each type of atom are in a covalent molecule. The Fake Greek Prefixes (FGPs) are used to indicate the number. It would be wise of you to know the list of FGPs (mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, deca-, and so on). In saying or writing the name of a binary covalent compound, the FGP of the first element is said, then the name of the first element, then the FGP of the second element, and then the name of the second element, usually with the ending "-ide" on it. The only notable exception to the rule is that if the first-mentioned element has only one atom in the molecule, the "mono-" prefix is omitted. CO is carbon monoxide. CO2 is carbon dioxide. In both cases there is only one carbon in the molecule, and the "mono-" prefix is not mentioned. For oxygen the last vowel of the FGP is omitted, as in the oxides of nitrogen in the table above.

COMMON NAMES OF BINARY COVALENT COMPOUNDS YOU SHOULD KNOW
- H2O water
- N2H4 hydrazine
- CH4 methane
- C2H2 acetylene

CHECKLIST OF KNOWLEDGE FOR WRITING COMPOUNDS

Here's a checklist of the things you need to know to be able to correctly write the formulas for materials.
- NAMES AND SYMBOLS OF THE ELEMENTS
- NAMES AND SYMBOLS OF DIATOMIC GASES
- NAMES, SYMBOLS, AND VALENCES OF THE ELEMENTS IN GROUPS 1, 2, 7, AND 8
- NAMES, SYMBOLS, AND VALENCES OF METALS WITH ONE COMMON VALENCE
- NAMES AND VALENCES OF METALS WITH MORE THAN ONE COMMON VALENCE
- NAMES, FORMULAS, AND CHARGES OF COMMON POLYATOMIC IONS
- NAMES AND FORMULAS OF COMMON ACIDS
- HOW TO TELL THE DIFFERENCE BETWEEN COVALENT AND IONIC COMPOUNDS
- HOW TO WRITE THE FORMULA OF IONIC COMPOUNDS
- LIST OF FAKE GREEK PREFIXES UP TO TWELVE
- HOW TO WRITE THE FORMULA OF BINARY COVALENT COMPOUNDS
- COMMON NAMES OF SOME BINARY COVALENT COMPOUNDS

MORE ON BONDS, SHAPES, AND OTHER FORCES

THE CONTINUUM BETWEEN IONIC AND COVALENT BONDS

In an attempt to simplify, some books may seem to suggest that covalent and ionic bonds are two separate and completely different types of attachment. A covalent bond is a shared pair of electrons. The bond between the two atoms of any diatomic gas, such as chlorine gas, Cl2, is certainly equally shared. The two chlorine atoms have exactly the same pull on the pair of electrons, so the bond must be exactly equally shared. In cesium fluoride the cesium atom certainly donates an electron and the fluorine atom certainly craves an electron. Both the cesium ion (Cs+) and the fluoride ion (F-) can exist in solution independently of the other. The bond between a cesium and a fluoride ion to make cesium fluoride (CsF) would be clearly ionic because the difference in electronegativities (ΔEN) is so large. The amount of pull an atom has on a shared pair of electrons, called electronegativity, is what determines the type of bond between atoms. Considering the Periodic Table without the inert gases, electronegativity is greatest in the upper right of the Periodic Table and lowest at the bottom left. The bond in francium fluoride should be the most ionic. Some texts refer to a bond that is between covalent and ionic, called a polar covalent bond. There is a range of bond between purely ionic and purely covalent that depends upon the electronegativity of the atoms around that bond.
If there is a large difference in electronegativity, the bond has more ionic character. If the electronegativity of the atoms is more similar, the bond has more covalent character. Back to the top of BONDS AND STRUCTURES Lewis structures are an opportunity to better visualize the valence electrons of elements. In the Lewis model, an element symbol is inside the valence electrons of the s and p subshells of the outer ring. It is not very convenient to show the Lewis structures of the Transition Elements, the Lanthanides, or Actinides. The inert gases are shown having the element symbol inside four groups of two electrons symbolized as dots. Two dots are above the symbol, two below, two on the right, and two on the left. The inert gases have a full shell of valence electrons, so all eight valence electrons appear. Halogens have one of the dots missing. It does not matter on which side of the symbol the dot is missing. Group 1 elements and hydrogen are shown with a single electron in the outer shell. Group 2 elements are shown with two electrons in the outer shell, but those electrons are not on the same side. Group 3 elements have three dots representing electrons, but the electrons are spread around to one per position, as in Group 2 elements. Group 4 elements, carbon, silicon, etc. are shown as having four electrons around the symbol, each in a different position. Group 5 elements, nitrogen, phosphorus, etc. have five electrons in the outer shell. In only one position are there two electrons. So Group 5 elements such as nitrogen can either accept three electrons to become a triple negative ion or join in a covalent bond with three other items. When all three of the unpaired electrons are involved with a covalent bond, there is yet another pair of electrons in the outside shell of Group 5 elements. Group 6 elements, oxygen, sulfur, etc., have six electrons around the symbol, again without any concern to position except that there are two electrons in two positions and one electron alone in the other two positions. Group 7 elements have all of the eight outside electrons spaces filled except for one. The Lewis structure of a Group 7 element will have two dots in all four places around the element symbol except for one. Let's start with two atoms of the same type sharing a pair of electrons. Chlorine atoms have seven electrons each and would be a lot more stable with eight electrons in the outer shell. Single chlorine atoms just do not exist because they get together in pairs to share a pair of electrons. The shared pair of electrons make a bond between the atoms. In Lewis structures, the outside electrons are shown with dots and covalent bonds are shown by bars. This covalent bond between chlorine is one of the most covalent bonds known. Why? A covalent bond is the sharing of a pair of electrons. The two atoms on ether side of the bond are exactly the same, so the amount of "pull" of each atom on the electrons is the same, and the electrons are shared equally. Next, let's consider a molecule in which the atoms bonded are not the same, but the bonds are balanced. Methane, CH4, is such a molecule. If there were just a carbon and a single hydrogen, the bond between them would not be perfectly covalent. Hydrogen has a slightly lower electronegativity than carbon, so the electrons in a single H-C bond would, on average, be closer to the carbon than the hydrogen. Carbon would be more negative. 
But the Lewis structure below shows that there are four hydrogens around a carbon atom, and that they are evenly separated. In the CH4 molecule, the four hydrogen atoms exactly balance each other out. The Lewis structure of methane does not have any electrons left over. The carbon began with four electrons and each hydrogen began with two electrons. Only the bars representing the shared pairs of electrons remain. The carbon now shares four pairs of electrons, so this satisfies the carbon's need for eight electrons in the outside shell. Each hydrogen has a single shared pair in the outside shell, but the outside shell of the hydrogen only has two electrons, so the hydrogen has a full outer shell also. (The Lewis structure as shown on the left is not the real thing. The hydrogens repel each other, so the shape of the methane molecule is really tetrahedral, but the effect is the same. The methane shape drawn in primitive 3-D to the right is a more accurate representation of the methane tetrahedral molecule.) Carbons and hydrogens are nice and easy to write in Lewis structures, because each carbon must have four attachments to it and each hydrogen atom must have one and only one attachment to it. When the bonds around a carbon atom go to four different atoms, the shape of the bonds around that carbon is roughly tetrahedral, depending upon what the materials are around the carbon. Carbons are also able to have more than one bond between the same two. Consider the series ethane (C2H6), ethene (C2H4),(common name is ethylene), and ethyne (C2H2), (common name is acetylene). H3 – C – C – H3 ethane H2 – C = C – H2 ethylene H – C ≡ C – H acetylene In writing the Lewis structure of compounds, the bars representing bonds are preferred to the dots representing individual electrons. The double bars between the carbons in ethylene, C=C, represent a double bond between the two carbons, that is four shared electrons to make a stronger attachment between the two carbons. The triple bars between the carbons of acetylene represent a triple covalent bond between those two carbons, C≡C, three pairs of shared electrons between those carbons. Every carbon has four bonds to it showing a pair of electrons to make eight electrons (or four orbitals) in the outer shell. Each hydrogen atom has one and only one bond to it for two electrons in the outer shell that occupies the only orbital that hydrogen has. All of the outer shells are usually filled. While we are doing this, notice that the Lewis structure of a molecule will show the shape of the molecule. All of the bonds in ethane are roughly the tetrahedral angle, so all of the hydrogen atoms are equivalent. This is true. The bonds in acetylene make it a linear molecule. The bonds in ethylene are somewhat trigonal around the carbons, and the carbons cannot twist around that bond as they can around a single bond, so that the molecule has a flat shape and the attachments to the carbons are not equivalent. This is also true. (You will see this in the study of organic chemistry. This type of difference between the positions of the hydrogen atoms is called cis - trans isomerism.) The Lewis structure shows the shape of a molecule or polyatomic ion with the bonds to each atom drawn at 90 degrees (right, left, up, and down) from the atomic symbol and the non – bonded electrons as dots, usually in pairs, around the atomic symbol in the left, right, up, and down positions around the atom. 
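The ethane/ethylene/acetylene series above follows directly from the counting rule that each carbon wants four bonds and each hydrogen one: a hydrocarbon CnHm must contain (2n + 2 - m)/2 extra shared pairs between carbons (a double bond counts as one extra pair, a triple bond as two). Here is a small Python check of that arithmetic, with an illustrative function name.

    def extra_bonds(carbons, hydrogens):
        """Extra shared pairs (degrees of unsaturation) for a CnHm hydrocarbon.

        Each carbon supplies 4 bonding spots and each hydrogen 1; whatever
        cannot be used on C-H and C-C single bonds shows up as extra pairs
        shared between carbons (a double bond = 1, a triple bond = 2).
        """
        return (2 * carbons + 2 - hydrogens) // 2

    for name, c, h in [("ethane C2H6", 2, 6),
                       ("ethylene C2H4", 2, 4),
                       ("acetylene C2H2", 2, 2)]:
        print(name, "->", extra_bonds(c, h), "extra bond(s) between the carbons")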
We could set up a group of general guidelines for the drawing of Lewis structures for simple molecules or polyatomic ions. Write all the atoms in the material in the form of the formula of the compound. CO2 can be an example. - Usually pick the atom with the lowest electronegativity (most distant from fluorine on the Periodic Table) to be the central atom or atoms. (In most organic compounds, carbon provides the main "skeleton" of the molecule.) The lowest electronegativity atom, the central atom, is usually written first in the compound. Carbon is the obvious candidate for the central atom. - Arrange the other atoms around the inner core according the formula of the material using single bonds to hold the structure together. This is called the skeleton structure. The skeleton structure for carbon dioxide should be: O – C – O - Count the Total Valence Electrons (TVE) of the molecule. This is done by adding up the electrons in the outside shell of each atom. This is easy for “main sequence” atoms, groups I, II, IIIA, IVA, VA, VIA, and VIIA. (See the Periodic Table as you do this.) Hydrogen and group I, or group 1, all have 1 electron in the outside shell. Group II or 2, starting with beryllium, have two electrons in the outer shell. Group IIIA or 13, starting with boron, have three electrons in the outer shell. Group IVA or 14, starting with carbon have four electrons in the outer shell. Group VA or 15, beginning with nitrogen, have five electrons in the outer shell. Group VIA or 16, beginning with oxygen, have six electrons in the outer shell. Group VIIA or 17, the halogens, have seven electrons in the outer shell. Notice that we usually don’t include the eight electrons in the outer shell of most inert gases because the noble elements do not usually make compounds. There are six valence electrons in each oxygen atom in CO2 for a total of 12 and four electrons in the carbon atom for a grand total of 16 electrons in the CO2 structure. TVE = 16 electrons. Carbon dioxide is a molecule and does not have a charge, but if you draw the Lewis structure of a polyatomic ion, you should add an electron for each negative charge and remove an electron from the TVE for each positive charge. - Subtract the number of electrons in the bonds of the skeleton structure from the TVE and you will have the number of electrons you have to represent as dots around the atoms. For CO2, the math is: TVE = 16 electrons Electrons in bonds = - 4 electrons (two bonds) Dots needed = 12 dots - Distribute the dots (representing electrons) around the structure to the terminal atoms first. Hydrogen does not get any dots. It has all the electrons it can take with just the bond. All other atoms get a maximum of four orbitals, six dots if the atom has one bond to it, four dots if the atom has two bonds to it, two dots if the atom has three bonds to it, and no bonds if it has four bonds to it. . . . . : O – C – O : This is the proposed shape for the CO2 molecule in the skeletal form. . . . . - The proposed shape above has some problems with it. There are too many electrons assigned to the oxygen atoms and not enough to the carbon. The way to express this idea is the formal charge. The formal charge is the number of electrons the atom brought to the structure minus the number of electrons shown in the proposed structure. The oxygen atoms both had six electrons in the valence shell because they are group VI A or group 16 atoms. They SHOW seven electrons in the proposed scheme, six dots and one electron from half the bond. 
6 – 7 = - 1, so the formal charge of both the oxygen atoms is -1. The carbon atom brought four electrons, being from group IV A or 14. Carbon shows only two electrons, one from each of the bonds, so 4 – 2 = 2. The formal charge of the carbon is plus two. The difference in formal charge indicates that there is a problem, but it also shows a likely way to balance things out. - If you have a structure where there are atoms around a bond that have opposing charges, the likely way to even out those charges is to take a pair of electrons from the negative atom and make it part of a multiple bond with the positive atom. Now the CO2 molecule looks a lot better. We changed the single bond to a double bond on both sides of the carbon. Now the formal charge of all three atoms is zero (You check it yourself.), and there are four and only four orbitals around each atom. Each oxygen atom has two bonds and two unshared (lone) pairs of electrons for a total of four orbitals. The carbon has four bonds to it, four orbitals. This condition with the lowest number of formal charges and the right number of orbitals around each atom is the most stable and the most likely correct Lewis structure. . . . . : O = C = O : This process of writing Lewis structures is very limited to small molecules. There are many exceptions to the process, for instance, there are some compounds in which one atom has only three orbitals around it. BF3, boron trifluoride is one in which the boron atom (central) is stuck with just three bonds to it. Some central atoms can have MORE than four orbitals around them. There is a phosphorus trichloride molecule (PCl3) that has the same shape as ammonia, but there is also a phosphorus pentachloride molecule (PCl5) that has five chlorine atoms attached to a central phosphorus. As you see, the scope of this tutorial goes only so far into the Lewis structure world. With the warnings in mind, here are some general rules that can often (maybe 85% of the time) lead you to correct Lewis structures of small molecules. - ; HONC, pronounced “honk.” This is the way to remember that all Hydrogens have one and only one bond to them. Most Oxygens have two bonds to them. Most Nitrogens have three bonds to them, and most Carbons have four bonds to them. SO REMEMBER: HONC 1, 2, 3, 4. - Carbon is always a central atom, except in diatomic molecules like carbon monoxide. - Hydrogen is always a terminal atom with only one bond and no dots. - The lowest electronegativity atom (NOT the closest to fluorine) is usually the central atom. - The structures are usually balanced around the central atom. The Lewis structures are usually good indicators of the actual shape of the molecule. We can tell that from the properties of the molecules. Rarely, but sometimes the best – looking Lewis structure is not the structure that predicts the properties of the material. In this case, the Lewis structure is wrong, and it probably makes some sense once the Lewis structure is written in the way that goes with the properties of the material. Back to the top of BONDS AND STRUCTURES SHAPES AROUND AN ATOM, VSEPR THEORY There is no issue of shape around the Group 1 elements. There is only one attachment to them, so no angle is possible around them. But there are some molecular compounds with only two atoms, such as nitrogen monoxide, NO. The only feature of this molecule is the bond between the nitrogen atom and the oxygen atom. 
The small difference in electronegativity between the oxygen and the nitrogen give the molecule a small dipole, a small separation of charge, so a small amount of polarity. Because there are an odd number of electrons in NO, this makes for an interesting Lewis structure. Try it.) Iodine fluoride, IF, is another diatomic compound that should have some polarity. Diatomic molecules like chlorine gas, Cl2, have no electronegativity difference (ΔEN) from side to side of the bond, so they are completely balanced and completely non – polar. Group 2 elements have two electrons in the outer shell. Many of the compounds of Group 2 elements are ionic compounds, not really making an angle in a molecule. Molecules made with Group 2 elements that have two attached items to the Group 2 element have a linear shape, because the two attached materials will try to move as far from each other as possible. A linear shape means that a straight line could be made through all three atoms with the central element in the center. The shape of carbon dioxide is linear with the carbon in the center. O = C = O VSEPR stands for Valence Shell Electron Pair Repulsion. The idea is a disarmingly simple one. Electrons are all negatively charged, so they repel each other. If an atom has two electron groups around it, the electrons, and the atoms they are bonded to, are likely to be found as far as they can be from each other. “As far as they can get from each other,” and still remain attached to the central atom means that the angle around the central atom is 180 degrees, a straight line. Molecules with two electron groups attached to a central atom have a linear electron group shape and a linear molecular shape. Unless there is a large difference in electronegativity from one side to the other of a linear compound, there is no separation of charge and no polar character of the molecule. Covalent compounds with boron are good examples of trigonal shaped molecules. The trigonal shape is a flat molecule with 120 degree angles between the attached atoms. Again using the example of a boron atom in the center, the attached elements move as far away from each other as they can, forming a trigonal shape, also called triangular, or trigonal planar to distinguish it from the trigonal pyramidal shape of compounds like ammonia. BF3, boron trifluoride, is an example of a molecule with a trigonal planar shape. Each fluorine atom is attached to the central boron atom. There are three bonds to the boron, so the electron group shape is trigonal planar around boron. The molecular shape is also trigonal planar in boron trifluoride because each electron group has a fluorine atom attached to it. But, what if the central atom has two other atoms and a lone pair of electrons attached to it? Nitrogen oxychloride is an example of that. NOCl, is a molecule with nitrogen in the center (See how to write Lewis structures above.) and an oxygen and a chlorine atom attached to the central nitrogen. When we go through the skeleton structure and distribute the electron dots, we find that there is a double bond between the nitrogen and the oxygen and a lone pair (unshared pair) of electrons on the nitrogen in addition to the single bond from the nitrogen to the chlorine. There are three electron groups around the nitrogen, making the electron group shape more or less trigonal planar. But only two of those electron groups have an atom attached, so the molecular shape of nitrogen oxychloride is bent or angular. 
NOCl is not a balanced shape, so it is likely that there is some separation of charge within the molecule, making it a somewhat polar compound. Group 4 elements are not in the center of a flat molecule when they have four equivalent attachments to them. As with two or three attachments, the attached items move as far as they can away from each other. In the case of a central atom with four things attached to it, the greatest angle between the attached items does not produce a flat molecule. If you were to cut off the vertical portion of a standard three-legged music stand so that it was the same length as the three legs, the angles among all four directions would be roughly equal. Try this with a gumdrop or a marshmallow. Stick four different colored toothpicks into the center at approximately the same angle. If you have done it right, the general shape of the device will be the same no matter which one of the toothpicks is up. This shape is called tetrahedral. The shape of a tetrahedron appears with the attached atoms at the points of the figure and each triangle among any three of them makes a flat plane. A tetrahedron is a type of regular pyramid with a triangular base. Carbon is a group four element. Organic and biochemical compounds have carbon as a “backbone,” so this tetrahedral shape is very important. Methane, CH4, and carbon tetrachloride, CCl4, are good examples of tetrahedral shape. If you draw the Lewis structures of these compounds, you will see that there are four bonds to the central carbon atom, but no other electrons on the central atom. They have four electron groups (single bonds) around the central atom, so they have a tetrahedral electron group shape. Each bond to the central carbon has an atom attached, so they have a tetrahedral molecular shape. In both compounds, the four atoms attached to carbon are the same, so there is no separation of charge. All four atoms have the same electron pull in balanced directions, so these compounds are non – polar. Can a central carbon make molecules with other shapes around the central atom? Yes, you remember carbon dioxide, where there are two double bonds around the carbon. O = C = O Each double bond is an electron group, so there are only two electron groups around the carbon in carbon dioxide. See the “acid carbons,” the ones with the ionizable hydrogen (in blue) on it. The shape of around the acid carbons is trigonal planar because it has a double bond to it and only three electron groups, but the shape around the other carbons is tetrahedral. In the Lewis structures the atoms are drawn at ninety degrees from each other, but the real shape around those carbons exists in three – space. Group 5 elements, for instance nitrogen or phosphorus, will become triple negative as they add three electrons in ionic reactions, but this is rare. Nitrides and phosphides do not survive in the presence of water. Covalent bonds with these elements do survive in water. From the Lewis structure of these elements in the previous section, you know that Group 5 elements have the capability of joining with three covalent bonds, but they don’t make the trigonal shape because the UNSHARED PAIR OF ELECTRONS ACTS LIKE ANOTHER BONDED ATTACHMENT. The shape of the bonds and the lone pair of electrons around nitrogen and phosphorus is tetrahedral, just like the bonds around Group 4 elements. The molecular shape is trigonal pyramidal. See the images below. The one on the left is a Lewis structure representation of an ammonia molecule. 
The one on the right is an attempt at showing the 3-D shape of the same ammonia molecule. The color and the length of the bonds are only to show the shape better. Notice that the unshared pair (lone pair) of electrons actually repels MORE than the hydrogen atoms, so the angle between the hydrogen atoms is a little LESS than the tetrahedral angle of 109.5 degrees. Group 6 elements, oxygen and sulfur, have six electrons in the valence shell. The compounds they make usually have two pairs of unshared electrons. Just as in Group 5 elements, these pairs of unshared electrons serve as other attached atoms for the electron shape of the molecule. Group 6 elements make tetrahedral electron shapes, but now there are only two attached atoms. The angle between the hydrogens in water is about 105 degrees. This peculiar shape is one of the things that makes water so special. Group 7 elements have only one chance of attachment, so there is not usually any shape around these atoms.

INTERMOLECULAR FORCES IN WATER

The alchemists of old had several other objectives aside from making gold. The thought of a fluid material that could dissolve anything, the universal solvent, was another alchemical project. No alchemist would say, though, what material would hold such a fluid. Surprisingly, the closest thing we have to a universal solvent is water. Water is not only a common material, but the range of materials it dissolves is enormous. The guiding principle for predicting which materials dissolve in which solvent is that 'like dissolves like.' Fluids in which the atoms are attached with covalent bonds will dissolve covalent molecules. Fluids with a separation of charge in the bonds will dissolve ionic materials. The bonds that hold hydrogen atoms to oxygen atoms are closer to covalent than ionic, but the bond does have a great deal of ionic character. Oxygen atoms are more electronegative than hydrogen atoms, so the electron pair is held closer to the oxygen atom. Another way to look at it is that only a very small number of water molecules are ionized at any one time. The ionization of water, H2O → H+ + (OH)-, into hydrogen ions (actually, hydronium ions) and hydroxide ions happens in only a very small number of the water molecules, but the effect is quite important as the reason for the existence of acids and bases. Materials of a mildly covalent nature, such as small alcohols and sugars, are soluble in water due to the mostly covalent nature of the bonds in water. The shape of the water molecule is bent at about a 105 degree angle due to the electron structure of oxygen. The two pairs of electrons that force the attached hydrogens into something close to a tetrahedral angle give the water molecule an unbalanced shape like a boomerang, with oxygen at the angle and the hydrogen atoms at the ends. We can think of the molecule as having an 'oxygen side' and a 'hydrogen side'. Since the oxygen atom pulls the electrons closer to it, the oxygen side of the molecule has a slight negative charge. Cations (positive ions) are attracted to the partial negative charge on the oxygen side of water molecules. Likewise, the hydrogen side of the molecule has a slight positive charge, attracting anions. Polar materials such as salts, materials that have a separation of charge, dissolve in water due to the charge separation of water. The origin of the separation is called a dipole moment, and the molecule itself can be called a dipole.
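The VSEPR reasoning in the last few paragraphs reduces to counting electron groups around the central atom and how many of them are lone pairs. The Python sketch below is only a lookup of the common cases discussed above; the table and function name are invented for illustration.

    # Molecular shape from (total electron groups, lone pairs) on the central atom,
    # for the cases discussed in the text.
    VSEPR_SHAPES = {
        (2, 0): "linear",               # CO2
        (3, 0): "trigonal planar",      # BF3
        (3, 1): "bent (angular)",       # NOCl around N
        (4, 0): "tetrahedral",          # CH4, CCl4
        (4, 1): "trigonal pyramidal",   # NH3
        (4, 2): "bent (angular)",       # H2O
    }

    def molecular_shape(electron_groups, lone_pairs):
        return VSEPR_SHAPES.get((electron_groups, lone_pairs), "not covered in this sketch")

    print("water:", molecular_shape(4, 2))
    print("ammonia:", molecular_shape(4, 1))
    print("carbon dioxide:", molecular_shape(2, 0))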
The Lewis structure of water (on the right above) would almost tempt you to believe the molecular shape is linear. It is not. The actual shape is a little better shown as in the drawing on the right. The oxygen has FOUR electron groups around it, making the electron group shape tetrahedral. The drawing shows a larger than ninety degree angle between the hydrogen atoms and the two pairs of unshared electrons (lone pairs) as having one pair coming out of the screen towards you and the other pair going into the screen. The oxygen has a larger electronegativity, so there is a larger concentration of electrons (negative charge) to the left of the molecule. This dipole or separation of charge within the molecule makes water a polar solvent. It attracts positive ions to the oxygen side of the molecule and negative ions to the hydrogen side of the molecule. Molecules or atoms that have no center of asymmetry are non-polar. Atoms such as the inert gases have no center of asymmetry. Molecules such as methane, CH4, are likewise totally symmetrical. Very small forces, called London forces, can be developed within such materials by the momentary asymmetries of the material and induction forces on neighboring materials. These small forces account for the ability of non-polar particles to become liquids and solids. The larger the atom or molecule, the more potent the London forces, possibly due to the greater ability to separate charge within a larger particle. The larger the inert gas, the higher its melting point and boiling point. In alkanes, a series of non-polar hydrocarbon molecules, the larger the molecule, the higher the melting and boiling point. There may be London forces in water molecules, but the enormous force of the dipole interaction completely hides the small London forces. The dipole forces within water are particularly strong for two additional reasons. Dipole forces that involve hydrogen atoms around a strongly electronegative material such as nitrogen, oxygen, fluorine, or chlorine are particularly strong due to the small size of the hydrogen atom compared to the size of the dipole force. Such dipoles have significantly stronger forces, and have been called hydrogen bonds. In water, this effect is even greater due to the small size of the oxygen atom, thus the whole water molecule. In a water molecule hydrogen bonding is a large intermolecular force in a small volume on a small mass that makes it particularly noticeable. Compare methane, CH4, to water. They are similar in size and mass, but methane is non-polar and water is very highly polar due to the hydrogen bonding. The melting point for methane is -184 °C (89 K) and for water is 0 °C (273 K). The boiling point for methane is -161.5 °C (111.7 K) compared to water at 100 °C (373.2 K). The temperature range over which methane is a liquid is less than a quarter the range for water. Most of these differences are accountable from the hydrogen bonding of water. The properties of water come directly from the molecular shape of it and the forces it has on it from that shape. Water is cohesive. It balls up with itself in zero gravity or on a non – polar surface like waxed paper. The surface tension of water is another product of the cohesive forces, mainly hydrogen bonding. Water is adhesive, that is, it clings to other things. It wets cotton or paper, it wets glass or ceramic, and it dissolves many compounds, to include polar compounds. 
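The methane-versus-water comparison above invites a quick arithmetic check of the claim that methane's liquid range is less than a quarter of water's. A few lines of Python, using only the temperatures quoted in the text:

    # Liquid ranges (boiling point minus melting point), in kelvin,
    # using the values quoted above.
    methane_range = 111.7 - 89.0   # K
    water_range = 373.2 - 273.0    # K

    print(f"methane is liquid over about {methane_range:.1f} K")
    print(f"water is liquid over about {water_range:.1f} K")
    print(f"ratio: {methane_range / water_range:.2f}")   # about 0.23, under one quarter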
Water is a very important material for living things because: - It has a high heat capacity, or specific heat; water absorbs or releases large amounts of heat with small changes in temperature. - It has a large range of temperature in which it is a liquid. - It has a high heat of vaporization; it takes a lot of heat to change liquid water into steam. - It is one of the best solvents, particularly for ionic materials. - Water forms hydration layers around large charged particles like proteins and nucleic acids that make the functions of the macromolecules possible. - It serves as the body’s major transport medium. - Water is an important part of hydrolysis and dehydration synthesis reactions. - Water forms a resilient cushion around certain body organs. There are three main types of bonding forces, forces that make compounds. Ionic bonding is just the attraction of a positive ion for a negative ion. Sodium chloride is a compound that is made of sodium ions, having lost an electron, with a positive charge, and negative chloride ions, negative because they attract another electron to fill the valence shell. Covalent bonds come about by a bonded pair of atoms sharing a pair (or more pairs) of electrons. Covalent bonds are usually stronger than ionic bonds. Ionic bonds can separate in water solution. Polar covalent bonds, such as the bonds between the hydrogen and oxygen atoms of water, happen when two atoms sharing a pair of electrons have a large difference in electronegativity. Three main types of intermolecular forces, hydrogen bonding, dipole interactions, and dispersion forces, are forces that do not make compounds, but attract or repel on an atomic level. The name London forces (from Fritz London) is sometimes used for the small dipole interactions and even smaller dispersion forces. Dispersion forces are caused by the momentary unbalance of electrons around an atom. They are called “dispersion” forces for the uneven dispersion of electrons. Even noble gases can have these forces. In fact dispersion forces are the only forces that pull noble gases together. In atoms or small molecules, dispersion forces are very small. The melting and boiling points of noble gases are very low because it takes very little energy to overcome the dispersion forces. In macromolecules like proteins or nucleic acids, though, the dispersion forces can develop to be much larger. In proteins and nucleic acids, dispersion forces rival the magnitude of the dipole forces and even hydrogen bonding. Dipole forces, or dipole – dipole interactions are the forces from polar molecules pulling together by the difference in charge from one side of a molecule to another. Iodine fluoride, IF, is likely to have a small positive charge near the iodine and a small negative charge near the fluorine, because fluorine is by far the most electronegative. The IF molecules have a tendency to arrange themselves with the positive end of one molecule near the negative end of another molecule. The dipole forces of water are fairly large due to the highly polar nature of the water molecule. In water, the most powerful intermolecular force is hydrogen bonding. Hydrogen bonding is the tendency of hydrogen atoms attached to highly electronegative atoms like fluorine, chlorine, or oxygen to seek other highly electronegative atoms in other molecules. The forces can make liquids viscous and cohesive. Water owes its cohesive properties mostly to hydrogen bonding. But hydrogen bonding is even more important in macromolecules. 
The secondary, tertiary, and quaternary structures of macromolecules are due in large part to hydrogen bonding. The association of opposing nucleotides in nucleic acids is due to hydrogen bonding. In DNA, adenine and thymine have two hydrogen bonds between them, and guanine and cytosine have three hydrogen bonds between them. This preserves the sequence of DNA on the opposing strands. You might say that our biology depends on hydrogen bonds.

Write the chemical formula as requested. Show where needed. Show valences for all ions. (Answers are given after each name.)
1. hydrochloric acid: HCl
2. sodium chloride: NaCl
3. uranium hexafluoride: UF6
4. strontium nitrate: Sr(NO3)2
5. calcium chloride: CaCl2
6. acetic acid: HC2H3O2
7. phosphoric acid: H3PO4
8. ammonia: NH3
9. chlorine: Cl2
10. lithium sulfate: Li2SO4
11. potassium chromate: K2CrO4
12. calcium hydroxide: Ca(OH)2
13. aluminum foil: Al
14. ammonium sulfate: (NH4)2SO4
15. sulfuric acid: H2SO4
16. ammonium iodide: NH4I
17. acetylene: C2H2
18. rubidium nitrite: RbNO2
19. lead II sulfite: PbSO3
20. copper I sulfide: Cu2S
21. aluminum oxide: Al2O3
22. magnesium bromide: MgBr2
23. sodium chlorate: NaClO3
24. iron II chloride: FeCl2
25. hydrogen gas: H2
26. silver chromate: Ag2CrO4
27. zinc bicarbonate: Zn(HCO3)2
28. barium oxide: BaO
29. aluminum nitrate: Al(NO3)3
30. diphosphorus pentoxide: P2O5
31. aluminum hydroxide: Al(OH)3
32. chromium III oxide: Cr2O3
33. lithium phosphate: Li3PO4
34. ice: H2O
35. nitrogen dioxide: NO2
36. iron III oxide: Fe2O3
37. sodium peroxide: Na2O2
38. copper II oxide: CuO
39. liquid nitrogen: N2
40. lead II acetate: Pb(C2H3O2)2
41. lead IV fluoride: PbF4
42. ferrous bromide: FeBr2
43. carbonic acid: H2CO3
44. silver bisulfite: AgHSO3
45. cupric hydroxide: Cu(OH)2
46. nitric acid: HNO3
47. mercury II bromide: HgBr2
48. stannic sulfide: SnS2
49. hydrofluoric acid: HF
50. potassium phosphate: K3PO4
51. iodine tribromide: IBr3
52. phosphorus pentafluoride: PF5
http://www.chemtutor.com/compoun.htm
High School Trigonometry/Angles in Triangles
The word trigonometry derives from two Greek words meaning triangle and measure. As you will learn throughout this chapter, trigonometry involves the measurement of angles, both in triangles and in rotation (e.g., like the hands of a clock). Given the importance of angles in the study of trigonometry, in this lesson we will review some important aspects of triangles and their angles. We'll begin by categorizing different kinds of triangles.
Learning Objectives
- Categorize triangles by their sides and angles.
- Determine the measures of angles in triangles using the triangle angle sum.
- Determine whether or not triangles are similar.
- Solve problems using similar triangles.
Triangles and Their Interior Angles
Formally, a triangle is defined as a 3-sided polygon. This means that a triangle has 3 sides, all of which are (straight) line segments. We can categorize triangles either by their sides or by their angles. The table below summarizes the different types of triangles.
Table 1.6: Types of triangles
- Equilateral/equiangular: A triangle with three equal sides and three congruent angles. This type of triangle is acute.
- Isosceles: A triangle with two equal sides and two equal angles. An equilateral triangle is also isosceles.
- Scalene: A triangle with no pairs of equal sides.
- Right: A triangle with one 90° angle. It is not possible for a triangle to have more than one 90° angle (see below).
- Acute: A triangle in which all three angles measure less than 90°.
- Obtuse: A triangle in which one angle is greater than 90°. It is not possible for a triangle to have more than one obtuse angle (see below).
In the following example, we will categorize specific triangles.
Determine which category best describes the triangle:
a. A triangle with side lengths 3, 7, and 8.
b. A triangle with side lengths 5, 5, and 5.
c. A triangle with side lengths 3, 4, and 5.
a. This is a scalene triangle.
b. This is an equilateral triangle (and therefore also isosceles and acute).
c. This is a right triangle, since 3² + 4² = 5²; it is also scalene.
While there are different types of triangles, all triangles have one thing in common: the sum of the interior angles in a triangle is always 180°. You can see why this is true if you remember that a straight line forms a "straight angle", which measures 180°. Now consider the diagram below, which shows the triangle ABC and a line drawn through vertex B, parallel to side AC. Below the figure is a proof of the triangle angle sum.
- If we consider sides AB and CB as transversals between the parallel lines, then we can see that angle A and angle 1 are alternate interior angles.
- Similarly, angle C and angle 2 are alternate interior angles.
- Therefore angle A and angle 1 are congruent, and angle C and angle 2 are congruent.
- Now note that angles 1, 2, and B form a straight line. Therefore the sum of the three angles is 180°.
- We can complete the proof using substitution: m∠A + m∠B + m∠C = m∠1 + m∠B + m∠2 = 180°.
We can use this result to determine the measures of the angles of a triangle. In particular, if we know the measures of two angles, we can always find the third.
Find the measures of the missing angles.
a. A triangle has two angles that measure 30° and 50°.
b. A right triangle has an angle that measures 30°.
c. An isosceles triangle has an angle that measures 50°.
a. The third angle measures 180 - 30 - 50 = 100°.
b. The triangle is a right triangle, which means that one angle measures 90°. So the third angle measures 180 − 90 − 30 = 60°.
c. There are two possibilities. First, if a second angle measures 50°, then the third angle measures 80°, as 180 − 50 − 50 = 80. In the second case, the 50° angle is not one of the congruent angles.
In this case, the sum of the other two angles is 180 − 50 = 130. Therefore the two angles each measure 65°.
Notice that information about the angles of a triangle does not tell us the lengths of the sides. For example, two triangles could have the same three angles but not be congruent; that is, the corresponding angles have the same measures, but the corresponding sides do not. However, these two triangles will be similar. Next we define similarity and discuss the criteria for triangles to be similar.
Similar Triangles
Consider the situation in which two triangles have three pairs of congruent angles. These triangles are similar. This means that the corresponding angles are congruent, and the corresponding sides are proportional. In the triangles shown above, we have the following:
- Three pairs of congruent angles.
- The ratios of sides within one triangle are equal to the ratios of sides within the second triangle.
- The ratios of corresponding sides are equal.
Recall that these triangles are considered to be similar because they have three pairs of congruent angles. This is just one of three ways to determine that two triangles are similar; the others are having all three pairs of corresponding sides in proportion (SSS), or having two pairs of sides in proportion and the included angles congruent (SAS). A special case of SSS is "HL", or "hypotenuse leg". This is the case of two right triangles being similar. This case is examined in the example below.
Determine if the triangles are similar. The triangles are similar. Recall that for every right triangle, we can use the Pythagorean Theorem to find the length of a missing side. In triangle ABC we can find the missing side that way, and similarly in triangle DEF. Therefore the sides of the triangles are proportional, with a ratio of 2:1. Because we will always be able to use the Pythagorean Theorem in this way, two right triangles will be similar if the hypotenuse and one leg of one triangle are in proportion with the hypotenuse and one leg of the second triangle. This is the HL criterion.
Applications of Similar Triangles
Similar triangles can be used to solve problems in which lengths or distances are proportional. The following example will show you how to solve such problems.
Use similar triangles to solve the problem: A tree casts a shadow that is 24 feet long. A person who is 5 feet tall is standing in front of the tree, and his shadow is 8 feet long. Approximately how tall is the tree?
The picture shows us similar right triangles: the person and his shadow are the legs of one triangle, and the tree and its shadow form the legs of the larger triangle. The triangles are similar because of their angles: they both have a right angle, and they share one angle. Therefore the third angles are also congruent, and the triangles are similar. The ratio of the triangles' lengths is 3:1. If we let h represent the height of the tree, we have h/5 = 24/8 = 3, so h = 15. The tree is about 15 feet tall.
Lesson Summary
In this lesson we have reviewed key aspects of triangles, including the names of different types of triangles, the triangle angle sum, and criteria for similar triangles. In the last example, we used similar triangles to solve a problem involving an unknown height. In general, triangles are useful for solving such problems, but notice that we did not use the angles of the triangles to solve this problem. This technique will be the focus of problems you will solve later in the chapter.
Points to Consider
- Why is it impossible for a triangle to have more than one right angle?
- Why is it impossible for a triangle to have more than one obtuse angle?
- How big can the measure of an angle get?
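A short Python sketch of the two computations used above, the triangle angle sum and the similar-triangle proportion (the function names are my own, for illustration only):

```python
def third_angle(a: float, b: float) -> float:
    """Return the third interior angle of a triangle, given the other two (in degrees)."""
    return 180.0 - a - b

def similar_height(known_height: float, known_shadow: float, target_shadow: float) -> float:
    """Use the similar-triangle proportion h / known_height = target_shadow / known_shadow."""
    return known_height * target_shadow / known_shadow

print(third_angle(30, 50))        # 100.0, as in part (a) above
print(third_angle(90, 30))        # 60.0, as in part (b)
print(similar_height(5, 8, 24))   # 15.0 feet, the height of the tree
```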
Review Questions
- Triangle ABC is an isosceles triangle. If side AB is 5 inches long, and side BC is 7 inches long, how long is side AC?
- Can a right triangle be an obtuse triangle? Explain.
- A triangle has one angle that measures 48° and a second angle that measures 28°. What is the measure of the third angle in the triangle?
- Claim: the two non-right angles in any right triangle are complements.
- In triangle DOG, the measure of angle O is twice the measure of angle D, and the measure of angle G is three times the measure of angle D. What are the measures of the three angles?
- Triangles ABC and DEF shown below are similar. What is the length of ?
- In triangles ABC and DEF above, if angle A measures 30°, what is the measure of angle E?
- Determine if the triangles are similar:
- A building casts a 100-foot shadow, while a 20-foot flagpole next to the building casts a 24-foot shadow. How tall is the building?
- Explain in your own words what it means for triangles to be similar.
Review Answers
- Either 5 inches or 7 inches.
- A right triangle cannot be an obtuse triangle. If a triangle is a right triangle, one angle measures 90 degrees. If a triangle is obtuse, another angle measures greater than 90 degrees. Then the sum of just those two angles would be greater than 180 degrees, which is not possible.
- (a) The angle sum in the triangle is 180. If you subtract the 90-degree angle, you have 180 − 90 = 90 degrees, which is the sum of the remaining angles.
- (b) 90 − 23 = 67°
- (a) No
- (b) Yes, by SSS or HL
- 83 ft
- Answers will vary. Responses should include (1) three pairs of congruent angles and (2) sides in proportion, or some other notion of "scaling up" or "scaling down".
Vocabulary
- acute angle: An angle with a measure of less than 90 degrees.
- congruent: Two angles are congruent if they have the same measure. Two segments are congruent if they have the same lengths.
- acute triangle: A triangle with all acute angles.
- isosceles triangle: A triangle with two congruent sides and, consequently, two congruent angles.
- equilateral triangle: A triangle with all sides congruent and, consequently, all angles congruent.
- scalene triangle: A triangle with no pairs of sides congruent.
- leg: One of the two shorter sides of a right triangle.
- hypotenuse: The longest side of a right triangle, opposite the right angle.
- obtuse angle: An angle that measures more than 90 degrees.
- parallel lines: Lines that never intersect.
- right angle: An angle that measures 90 degrees.
- transversal: A line that intersects parallel lines.
http://en.wikibooks.org/wiki/High_School_Trigonometry/Angles_in_Triangles
Introduction to Eigenvalues and Eigenvectors
What eigenvectors and eigenvalues are and why they are interesting.
For any transformation that maps from Rn to Rn, we've done it implicitly, but it's been interesting for us to find the vectors that essentially just get scaled up by the transformation. So the vectors that have the form: the transformation of my vector is just equal to some scaled-up version of the vector. And if this doesn't look familiar, I can jog your memory a little bit. When we were looking for basis vectors for the transformation, let me draw it. This was from R2 to R2. So let me draw R2 right here. Now let's say I had the vector, let's say v1 was equal to the vector (1, 2). And we had the line spanned by that vector. We did this problem several videos ago. And I had the transformation that flipped across this line. So if we call that line l, T was the transformation from R2 to R2 that flipped vectors across this line. So it flipped vectors across l.
So if you remember that transformation, if I had some random vector that looked like that, let's say that's x, that's vector x, then the transformation of x looks something like this, flipped across that line. That was the transformation of x. And, if you remember that video, we were looking for a change of basis that would allow us to at least figure out the matrix for the transformation in an alternate basis; and then we could figure out the matrix for the transformation in the standard basis. And the basis we picked were basis vectors that didn't get changed much by the transformation, or ones that only got scaled by the transformation. For example, when I took the transformation of v1, it just equaled v1. Or we could say that the transformation of v1 just equaled 1 times v1. So if you just follow this little format that I set up here, lambda, in this case, would be 1. And of course, the vector in this case is v1. The transformation just scaled up v1 by 1.
Now, in that same problem, we had another vector that we also looked at. It was the vector v2, which is, let's say, the vector (2, -1). And then if you take the transformation of it, since it was orthogonal to the line, it just got flipped over like that. And that was a pretty interesting vector for us as well, because the transformation of v2 in this situation is equal to what? Just minus v2. It's equal to minus v2. Or, you could say that the transformation of v2 is equal to minus 1 times v2. And these were interesting vectors for us because when we defined a new basis with these guys as the basis vectors, it was very easy to figure out our transformation matrix. And actually, that basis was very easy to compute with. And we'll explore that a little bit more in the future. But hopefully you realize that these are interesting vectors.
There was also the case where we had a plane spanned by some vectors, and then we had another vector that was popping out of the plane like that.
And we were transforming things by taking the mirror image across this, and we're like, well, in that transformation these red vectors don't change at all and this guy gets flipped over. So maybe those would make for good bases, or those would make for good basis vectors. And they did. So in general, we're always interested in the vectors that just get scaled up by a transformation. It's not going to be all vectors, right? This vector that I drew here, this vector x, it doesn't just get scaled up, it actually gets changed, its direction gets changed. The vectors that get scaled up might switch direction: they might go from this direction to that direction, or maybe that's x and then the transformation of x might be a scaled-up version of x. Maybe it's that. The actual line that they span will not change. And so that's what we're going to concern ourselves with.
These have a special name, and I want to make this very clear because they're useful. It's not just some mathematical game we're playing, although sometimes we do fall into that trap. They're actually useful. They're useful for defining bases, because in those bases it's easier to find transformation matrices. They're more natural coordinate systems, and oftentimes the transformation matrices in those bases are easier to compute with. And so these have special names. Any vector that satisfies this right here is called an eigenvector for the transformation T. And the lambda, the multiple that it becomes, is the eigenvalue associated with that eigenvector.
So in the example I just gave, where the transformation is flipping around this line, v1, the vector (1, 2), is an eigenvector of our transformation. So (1, 2) is an eigenvector, and its corresponding eigenvalue is 1. This guy is also an eigenvector: the vector (2, -1). He's also an eigenvector. A very fancy word, but all it means is a vector that's just scaled up by a transformation; it doesn't get changed in any more meaningful way than just the scaling factor. And its corresponding eigenvalue is minus 1.
Now, I don't know what this transformation's matrix is; I forgot what it was. We actually figured it out a while ago. If this transformation can be represented as a matrix-vector product, and it should be, it's a linear transformation, then any v that satisfies "the transformation of v is equal to lambda v", which would also be A times v, is also called an eigenvector of A, because A is just really the matrix representation of the transformation. So in this case, this would be an eigenvector of A, and this would be the eigenvalue associated with the eigenvector. So if you give me a matrix that represents some linear transformation, you can also figure these things out.
Now, in the next video we're actually going to figure out a way to figure these things out. But what I want you to appreciate in this video is that it's easy to say, oh, the vectors that don't get changed much. But I want you to understand what that means. They literally just get scaled up, or maybe they get reversed; their direction, or the lines they span, fundamentally don't change.
And the reason why they're interesting for us is, well, one of the reasons why they're interesting for us is that they make for interesting basis vectors: basis vectors whose transformation matrices are maybe computationally simpler, or ones that make for better coordinate systems.
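To connect this to computation, here is a small NumPy sketch (my own illustrative example, not from the video). For the reflection across the line spanned by (1, 2), the transformation matrix works out to A = (1/5)·[[-3, 4], [4, 3]], and its eigenvalues and eigenvectors are exactly the ones discussed above:

```python
import numpy as np

# Reflection across the line spanned by (1, 2), i.e. the line y = 2x.
A = np.array([[-3.0, 4.0],
              [ 4.0, 3.0]]) / 5.0

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # approximately [1., -1.] (order may vary)
print(eigenvectors)   # columns are unit-length eigenvectors

# Check the two vectors from the video directly:
v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, -1.0])
print(A @ v1)   # [1. 2.]   -> scaled by eigenvalue  1
print(A @ v2)   # [-2. 1.]  -> scaled by eigenvalue -1
```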
http://www.khanacademy.org/math/linear-algebra/alternate_bases/eigen_everything/v/linear-algebra--introduction-to-eigenvalues-and-eigenvectors
The basic element in a computer can be an electromagnetic relay, a vacuum tube, or a transistor. All three of these can function as switches which are electrically controlled. Of course, vacuum tubes and transistors can also function as analogue amplifiers, producing an output which is an amplified replica of the input; they don't need to be simply either "off" or "on". Because a computer only needs its basic elements to act as switches, it can use small and cheap vacuum tubes or transistors; while its components need to be reliable, they don't need to be high-quality in all respects. One transistor logic family, ECL (emitter-coupled logic), used in many of the highest-speed computers, takes advantage of the fact that a transistor can respond more quickly to its input if it does not have to go all the way from a fully-on state to a fully-off state. With relays, it is simple to understand how basic logic functions, such as AND, OR, and NOT, can be implemented. In the top of the diagram, we see a single relay in both its OFF and ON states. When no current is flowing through the magnet, the armature is at rest, so the switch on the left, near the coil, is closed, while the switch on the right, away from the coil, is open. If one connection to each switch is connected to the power, then the other connection on the first switch provides a signal that is the opposite of that going into the magnet, and the other connection on the second switch provides a signal that is the same as that going into the magnet. This shows how a relay can be used as an inverter. In the bottom of the diagram, we see how the AND and OR functions are achieved with relays. For current to flow out of a switch, the switch needs to be on, and the other end of the switch has to be connected to a possible source of current. So the signals A and B both must be live for the first relay in the diagram to produce a signal, and the second relay applies the signal labelled C as well, to produce a three-input AND gate. The third relay on the bottom produces the AND of D and E. As signals are either live and positive, or disconnected (and thus in what is known as a "high impedance" state), the outputs of the second and third relays simply need to be joined together to perform the OR operation. Electronic logic, involving vacuum tubes or transistors, is not quite as simple and convenient as relay logic, but the basic logical functions can still all be produced. This diagram shows two types of gates that can be made from vacuum tubes. The first is a triode NOR gate, the second a pentode NAND gate. The pentode NAND gate needs a level translation circuit so that its output has the same voltage levels as its inputs, and its power supplies are at significantly different voltages from its logic levels. The diagram below shows a few families of solid-state logic. Note that the gates have been named with the assumption that a positive voltage represents 1 and a negative voltage represents 0. This is merely a convention. In a logic family where the NOR gate is the basic construct, since AND gates are more common than OR gates in most digital circuits, the convention can be reversed so that the NOR gate becomes a NAND gate. The AND and OR gates shown for diode logic are not complete in themselves; in addition to the fact that an inverter cannot be made without amplifying components, the lack of amplification limits the complexity of logic circuits that can be built with them.
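As a rough sketch of how the relay constructions just described map onto Boolean operations (a conceptual model only; the function names are mine, and no electrical behaviour is simulated): series contacts give AND, joined outputs give OR, and a normally-closed contact gives NOT.

```python
# Conceptual model of relay logic: a "contact" either passes the supply or it does not.
def NOT(a: bool) -> bool:
    # Normally-closed contact: conducts when the coil is NOT energized.
    return not a

def AND(*inputs: bool) -> bool:
    # Contacts in series: current flows only if every coil is energized.
    return all(inputs)

def OR(*inputs: bool) -> bool:
    # Outputs joined together: any live branch makes the output live.
    return any(inputs)

# The three-relay example from the diagram: (A AND B AND C) OR (D AND E)
def example(a, b, c, d, e):
    return OR(AND(a, b, c), AND(d, e))

print(example(True, True, True, False, False))   # True, via the AND of A, B, and C
print(example(False, True, True, True, True))    # True, via the AND of D and E
print(example(False, True, True, True, False))   # False
```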
Despite these limitations, computers have been built using primarily diode logic, with the occasional vacuum tube for amplification; and a gate design in which diode logic governs the input to a transistor is the core of the logic family diode-transistor logic (DTL), not shown in the diagram. Both RTL and DTL had some significant limitations that TTL overcame, making it the most popular bipolar logic family. ECL, because it switched between smaller signal levels, did not drive its transistors into saturation and so avoided the delay of bringing them back out of it; while it was elaborate, and consumed more current, it was therefore used when the very highest performance was desired. Only one logic family using MOSFET transistors is illustrated, CMOS. In CMOS, every logic gate is implemented twice, once as itself in positive logic, and once as its opposite in negative logic. Although this seems wasteful, it has important advantages. A CMOS gate connects its output either to the positive supply or to ground. It doesn't contain any resistors, so none of its output levels depend on a continuous flow of current through a resistor, and no power is wasted that way when the gate is idle. Relay logic also had this desirable characteristic; but power was still constantly consumed, through the coil of the relay's electromagnet, whenever the relay was on. The input (gate) of a MOSFET is itself of high impedance, so it only demands that a small trickle of current flow into it. This very low power consumption made the extremely high packing density of current integrated circuits feasible. It is not entirely without disadvantages. Bipolar transistors come in two kinds, PNP and NPN. And, similarly, there are two kinds of field-effect transistors, p-channel and n-channel. The NPN and the n-channel transistor are generally preferred for higher quality circuits (thus, NMOS was used for microprocessors while PMOS was used for calculator chips); electrons are more mobile charge carriers than holes, and thus it is easier for low resistivities to be achieved in semiconductors doped with a donor impurity. A CMOS gate requires both kinds of MOSFET, and is thus limited by the characteristics of its p-channel MOSFETs. The diagram below shows, on the left, how CMOS circuitry is often constructed in practice: instead of using separate CMOS NAND and NOR gates, more complex circuits combining AND and OR functions are built up, along with their mirror images, on each side of a compound gate. The right side of the diagram illustrates what is known as domino logic. This addresses the problem that the p-channel MOSFETs limit the performance of a CMOS gate by building the logic circuit out of n-channel MOSFETs only, using only one p-channel MOSFET, along with a corresponding n-channel MOSFET for a clock signal. The result of doing this, called dynamic CMOS, is a circuit whose output can drive regular CMOS circuitry, but not another dynamic CMOS circuit; the addition of a CMOS inverter on the output, as shown, leads to domino logic. In practice, the logic circuit built from n-channel MOSFETs would be more complex than the three-gate example shown here. The diagram below illustrates how a CMOS NOR gate works for the different possible inputs it may receive. Areas at a positive potential are shown in red, those at a negative potential are shown in blue. Paths down which very little current would flow, and that only incidentally to the operation of the device, are shown in a lighter color.
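Before walking through the diagram, the NOR gate's switching behaviour can be summarized in a few lines of code (a behavioural sketch of a standard two-input CMOS NOR gate, not a description of this specific diagram): the two p-channel transistors in series pull the output high only when both inputs are low, and the two n-channel transistors in parallel pull it low when either input is high.

```python
def cmos_nor(a: int, b: int) -> int:
    """Behavioural model of a two-input CMOS NOR gate (1 = high, 0 = low)."""
    # Pull-up network: two p-channel MOSFETs in series, each conducting when its input is low.
    pull_up = (a == 0) and (b == 0)
    # Pull-down network: two n-channel MOSFETs in parallel, each conducting when its input is high.
    pull_down = (a == 1) or (b == 1)
    # In a static CMOS gate exactly one network conducts, so the output is never left floating.
    assert pull_up != pull_down
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nor(a, b))   # the output is 1 only for inputs 0, 0
```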
Returning to the NOR-gate diagram: note that the path between the two series transistors, in the case of two positive inputs to the device, is shown as gray, as it is isolated by two non-conducting transistors from both the positive and negative supplies. Current flows to the output from the positive supply at the top when negative inputs make both of the p-channel MOSFETs in series at the top conduct; current flows from the output into the ground at the bottom when a positive input makes either of the n-channel MOSFETs in parallel at the bottom conduct. Note that one way current can be consumed in CMOS is when, during switching, for a brief moment both the top and bottom parts conduct, if one pair of transistors receives signals before the other pair. Also note that the other transistor logic families did not attempt to obtain an AND logic function by placing transistors in series; this generally means that the input voltages to the transistors would differ, one being closer to the positive supply than the other, and that the voltage across the transistor would differ. Here is a diagram of how a CMOS NOR gate, such as the one described above, might look on an integrated circuit chip fabricated in an n-well process: Note the key to the various areas on the chip, and the schematic at the right which attempts to illustrate the location of the transistors in the actual form of the gate. More information on how modern CMOS chips are designed and made is available here. A bipolar logic family once thought very promising that also dispenses with resistors is Integrated Injection Logic. In I2L, the fundamental unit is not the AND or OR gate; logic is accomplished by a wired-OR function. But inverters with multiple outputs are needed, so that multiple OR combinations involving one common signal are kept separate. A wired-OR between three two-output inverters is shown in the diagram above, and the equivalent construct using relays as logic elements is shown in the lower right of the diagram. While there are no actual resistors in I2L logic, the PNP transistor whose emitter is connected to +V in the two-output inverters shown above has a function similar to that of a resistor. But because it is a transistor, it responds to the voltage level connected to its collector, leading to similar economies of energy to those of CMOS, if not quite as close to perfection. Closely related to ECL, a little-known early high-speed logic family was CTL, or complementary transistor logic. Fairchild was one of the main companies producing ICs belonging to this family. (Interestingly enough, they currently make microchips with a trademarked technology called Current Transfer Logic, this being a completely different, low-power and low-noise technology, but having the same initials.) It is of some historical importance; this is the logic family used in the NEAC 2200/500, produced by NEC, which was the first Japanese computer to use only ICs, and no discrete transistors, for its logic, and, closer to home, it was the logic family used in the IC-6000 by Standard Computer, which was a microprogrammable computer, available in 1966, which could either emulate the IBM 7090 family of computers, or use a custom instruction set for enhanced FORTRAN performance. The following illustration of a CTL gate was only possible thanks to this site, which seems to be the only place where this logic family is still described on the Web!
The site notes that, since like ECL, it is a design based on analogue amplification, obtaining speed by avoiding saturating the transistors, noise can propagate through the design. Although this looks like the part of the ECL gate that performed a NOR logic function, since the two input transistors have been reversed, this is an AND gate, since it is now a low input voltage rather than a high one which would make these transistors conduct. Since I wrote this, I was able to find more information about this logic family through a 1969 Fairchild catalogue. This logic family is actually referred to as Complementary Transistor Micrologic there, and perhaps one of its most serious limitations was that in addition to ground and +4.5 volts, a power supply of -2 volts was also required by it. There are many other possible logic families. For example, in the IBM System/390 computer, a logic family called DCS was used. A DCS gate appears almost identical to an ECL gate, except that the complement of an input bit goes to the other arm of the differential amplifier instead of simply having only one transistor there with a reference voltage. The following diagram illustrates the arrangement of a core plane: The wires running vertically and horizontally through the ferrite rings are the drive wires. The current going through each wire is carefully chosen, in relation to the magnetic characteristics of the cores, so that if a current flows through one of the drive wires passing through a core, nothing will happen, but if a current flows through both of the drive wires that pass through the core, the core will become magnetized in the direction determined by those currents. One of the results of the need to choose the current used carefully was that many core memory arrays either had their temperature controlled, or their temperature was measured, and circuitry was used to adjust the currents used to correspond with their current operating temperature. This difficulty was reduced, although not eliminated, with the use of lithium-ferrite cores in the early 1960s. One early mainframe computer that was advertised as not needing to be placed in an air-conditioned room because of the use of this new, improved type of core memory was the Honeywell 300, a scientific computer with a 24-bit word length. Lithium-ferrite cores were used in the Apollo Guidance Computer, the Hewlett-Packard 9100 calculator (which still also used temperature-compensated drive circuits), and the CDC 5100 computer (this was a ruggedized 16-bit minicomputer for military use; I haven't forgotten that it was IBM that made the 5100 Portable Computer) as well. As one reference notes that lithium ferrites came to dominate the core memory industry, presumably they were used in many other computers as well. Various further improvements, involving adding zinc, cobalt, or even calcium to lithium-ferrite cores were developed. Thus, one patent notes that manganese reduces magnetostrictive ringing, nickel makes for a more pronounced degree of hysteresis, and zinc reduces the strength of the magnetic field required to change the state of a core. Elsewhere, it is noted that the starting material for lithium-ferrite cores, before dopants are added, is the inverse spinel form, Li0.5Fe2.5O4, derived from Fe3O4, magnetite, presumably through replacing pairs of Fe2+ ions, which are found on some of the octahedral sites, with an Li1+ ion and an Fe3+ ion. 
Incidentally, I first found this information on the composition of lithium ferrite from a paper by Ampex on the use of this substance in devices to produce phase-shifting of microwaves, which it referred to as phasers. Seeing "lithium" and "phasers" together in a context outside of that in which it might be expected, as you might guess, gave me some amusement. Spinel, MgAl2O4, a mineral which, like garnet, used to be confused with ruby, gave the spinel crystal structure its name. In a normal spinel, the triply-ionized ions are all on octahedral sites while the doubly-ionized ions are split between octahedral and tetrahedral sites; in an inverse spinel, the doubly-ionized ions are all on octahedral sites, while the triply-ionized ions are split between octahedral and tetrahedral sites. The vector sum of the two currents will follow the same diagonal direction as the other wire shown passing through each core, the sense wire, and thus the magnetization will be along the circumference of the ferrite ring, in either a clockwise or a counter-clockwise direction. If a core is being magnetized in the same direction in which it is already magnetized, not much will happen, but if it is magnetized in the opposite direction, a faint electrical pulse will be detected along the sense wire. Hence, one reads a bit from a core plane by storing a zero in that bit; then, after one has read the bit that was there, one can go back and write the old value back in again. Normally, of course, a whole computer word is read from a core memory at once, and so several core planes in parallel are read or written; thus, there is no opportunity to skip the write-back step if a zero is read. Also, it is important that the two currents from the drive lines flow through the core in the same direction; whether the pulse on the sense line is positive or negative is not important, since the amplifier for that line can be designed to produce a logic signal in either case. Thus, the annular faces of the ferrite cores have been colored either red or blue in the diagram; inspection of the diagram will show that on any drive line, horizontal or vertical, a red face is always directed in the opposite direction, with respect to that drive line's direction, to that in which a blue face is directed. Note also that the ferrite rings have been tilted back, so that one of their faces is visible, and they could have been tilted back in either of two directions in each case. The pattern chosen in this diagram was one which has, as a result, the property that the cores oriented along each of the two diagonal axes of the diagram alternate in the direction they are tilted as the sense line alternates in the direction in which it passes through them; thus, the color of the face of the core indicates the polarity of the pulse it will send on the sense line. As long as both the vertical and horizontal drive lines alternate from one line to the next in the direction current travels through them, if the two drive lines passing through one core work in the same direction, they will all work in the same direction. The type of core memory illustrated above was known as 2 1/2 dimension core memory. A core in the type of core plane shown above is selected when both the X and Y lines going through that core have a pulse sent through them. When several core planes contain the individual bits of a word in memory, this means that for either the X lines or the Y lines, at least, separate drive circuits are needed for each plane. 
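The coincident-current selection and the read-by-writing-zero procedure described above can be sketched in a few lines (a simplified model of a single core plane, ignoring polarities, sense-amplifier details, and timing; the class and method names are my own):

```python
class CorePlane:
    """Toy model of a coincident-current core plane: a core flips only when
    both its X and Y drive lines are pulsed (two half-select currents)."""

    def __init__(self, rows: int, cols: int):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x: int, y: int, value: int) -> None:
        # Only the core at (x, y) sees two coincident half-currents; every other
        # core on those lines sees only one, which is below its switching threshold.
        self.bits[x][y] = value

    def read(self, x: int, y: int) -> int:
        old = self.bits[x][y]
        self.write(x, y, 0)            # destructive read: drive the core toward 0
        sensed = 1 if old == 1 else 0  # a pulse appears on the sense wire only if the core flipped
        self.write(x, y, old)          # the write-back cycle restores the old value
        return sensed

plane = CorePlane(64, 64)
plane.write(3, 5, 1)
print(plane.read(3, 5))   # 1, and the bit is restored afterwards
print(plane.read(3, 5))   # still 1
```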
The need for a separate set of drive circuits for each plane was avoided in standard 3-D core memory: Here, the wire shown in red was used as the sense line for reading, and when data was being written to core, it served as the inhibit line. The vertical wires, called the X lines, because the one that is used determines the horizontal position or x-coordinate of the core to be selected, have the inhibit line running parallel to them, but in the opposite direction. The X lines for the same column, and the Y lines for the same row, are connected together between all the planes, but the sense/inhibit line for each plane is separate. A pulse running through the inhibit line for a given plane cancels out the pulse running through the X line for that plane, allowing the contents of a word being written to memory to be controlled. But wait a moment! We noted above that two pulses will change the data stored in a core, but one pulse is not enough. So it is true that a pulse down the inhibit line will prevent the core in that plane at the intersection of the X and Y lines in use from being written. But what about all the other cores in that plane along the same Y line? Don't they all have two pulses going through them, one through the inhibit line, which is not cancelled, and one through the Y line? One way to deal with this is to send a pulse that is only half as strong down the inhibit line. If 1 1/2 pulses do not change the data stored in a core, then along the Y line in other planes, the bit that is to be written gets 2 pulses, and the others get only 1 pulse, but in the planes where the bit is not to be changed, all the cores along the Y line with a signal get 1 1/2 pulses. But it may not be necessary to change the strength of the pulse down the inhibit line and design the cores to higher tolerances. The pulse going through the inhibit line is opposite in polarity to the pulse going through the X lines, not only in the core where it cancels the pulse through the X line, but in the other cores where there is no signal through the X line to cancel. Because this signal is of the opposite type, its sum with a pulse through the Y line differs in direction by 90 degrees, or a right angle. Thus, while we have a sum that is strong enough to magnetize a core, it is not acting so as to magnetize the core along its circumference; the magnetic field instead crosses the core at right angles, and thus may have no significant effect. For a short time, fast memories were made using another application of the same principle. In a thin-film memory, one of the two wires passing by a spot of magnetic material, if the spots in its row were to be selected, always had current flowing in the same direction. The other wire, passing perpendicular to it, could have current flowing either upwards or downwards. The material was fabricated so that it retained magnetism well only in the direction induced by the wire that could have current going through it in either direction. But the strength of the signals going through the wires was chosen so that it would take both wires working together to shift the magnetism of a spot. Thus, when initially written, the direction of magnetization of each spot was in one of two directions differing by 90 degrees, but it would then settle into one of two directions separated by 180 degrees.
A small signal down the wire through which current flowed only in one direction would disturb the direction of magnetism in the spots temporarily; because the spots were fully saturated, rotating their direction of magnetism was the result, causing a change in the perpendicular direction as well, so that when the magnetization bounced back, a signal went through the other family of wires. This non-destructive readout principle is illustrated below for the form of biaxial core memory sold under the Biax trademark. The Univac 1107 computer had a small register file, containing a limited number of alternate sets of registers, made from thin-film memory; this comprised 128 words of 36 bits. Several computers made by Univac for military applications had larger thin-film memories. The IBM System/360 Model 95 computer had a thin-film main memory one megabyte in size, in addition to a supplementary core memory that was four megabytes in size, and was the only IBM computer to use this memory technology. IBM experienced a major struggle in producing a thin-film memory that was not subject to pattern sensitivity, a flaw where some bit patterns cannot be stored properly. The solution found was to cover the memory plane with sheets of a soft magnetic material; these provided a return path for magnetic fields, not dissimilar to that which is intrinsic in the round shape of a magnetic core. This technology was soon superseded by solid-state memory in any event; the ILLIAC IV, and the ASC (Advanced Scientific Computer) from Texas Instruments were two computers originally planned to have thin-film memories which ended up having semiconductor memories instead. Incidentally, the later IBM System/360 Model 195 used the same four-megabyte core memory as the Model 95, but this time as its main memory; it had a cache made from semiconductor memory which contained 32 kilobytes, following the success of a cache memory in improving the performance of the IBM System/360 Model 85 computer. The core memory used on the System/360 Model 95 and 195 had a cycle time of 750 nanoseconds; this was very fast for core memory. In comparison, ordinary core memory might have had a cycle time from 2 to 5 microseconds, and slow core memory, used as "bulk core", might have a cycle time of 10 microseconds. The thin film memory on the Model 95 had a cycle time of 120 nanoseconds and an access time of 67 nanoseconds; in comparison, the semiconductor cache of the Model 195 had a cycle time of 54 nanoseconds. The speed of core memory, of course, improved during the years in which it was in use. Sometimes, high-speed memories were made using other types of magnetic core than the simple toroidal, or doughnut-shaped, core. For example, rectangular cores with three holes in a line were studied as a possible high-speed core for use on the STRETCH computer from IBM, and cores that were square prisms, with holes running through them at right angles, called biaxial cores, were used as the fast memory for microcode on the Packard-Bell 440 computer, as well as being previously used on the Univac LARC computer for its registers (if I remember correctly). This diagram illustrates three major forms of nondestructive read-out magnetic core memory that were in use, the transfluxor, the Biax, and plated-wire memory. On the left of the diagram, we see the transfluxor. 
This drawing is conceptual; when there is only one small hole, instead of two or four symmetrically distributed around the large hole, the large hole is actually offset to one side to make the path past the small hole wider. On the top, we see the core magnetized in a clockwise direction as a normal magnetic core. Assuming the core is fully magnetized, or saturated, a current going through the small hole is unable to magnetize a region around the small hole in either direction, because to do so, it would have to more-than-saturate the part of the transfluxor either to the right of it (if clockwise magnetization is attempted) or to the left of it (if counterclockwise magnetization is attempted). On the bottom, the core has first been magnetized in a clockwise direction by a large current through the central aperture, and then the inner part has been magnetized in a counter-clockwise direction by a smaller current through the central aperture. In this state, it is still impossible for a current through the small hole to magnetize the area around it in a clockwise direction, because then it would have to more-than-saturate the areas to both the left and the right of it, but a current through the small hole can magnetize the area around it in a counter-clockwise direction, since that reduces the magnetization of the parts of the core on both sides. This makes the transfluxor useful as an amplifying device, since a sense wire also going through the small aperture can determine if a current through a write wire through the small aperture has made any change in the core. Since the change is to a small part of the core, the rest of the core acts as a magnet that restores the change made by currents through the small aperture, so that rewriting is not required when this type of memory is read. In the middle of the diagram, we see the Biax. The write line goes through the core in one direction, and can create clockwise (blue) or counterclockwise (red) magnetization around it. In a perpendicular line, the interrogate wire goes through the core. If we pass a current through it that leads to a counterclockwise magnetization, the change to the direction of magnetization in the area between the holes is shown by the green arrows. When the magnetization in the rest of the area around the write line causes the magnetization to spring back to normal, the result looks the same, from the perspective of the interrogate line, whether the magnetization around the write line was clockwise or counterclockwise. A change takes place to the magnetic flux in the direction opposite to the green arrows, or clockwise from the perspective of the interrogate wire. But from the perspective of the write line, there is an increase in the flux in the direction of the core's original magnetization. (This is because the core was saturated when it was magnetized in the first place, so the current in the interrogate line only changed the direction of the magnetization without increasing its strength.) So the sense line goes through these cores in the same direction as the write line. The third part of the diagram shows plated-wire memory. The plated wires were the ones through which current flowed in either direction, and they magnetized the plating in the direction in which it would stay magnetized. The wires crossing them had current going through in only one direction, causing a change in magnetization that did not last. This memory worked using the same principle as thin-film memory. 
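The nondestructive-readout behaviour of the transfluxor described above can be caricatured as a two-state device (a toy model, not a physical simulation; the state names are mine): a "blocked" core ignores drive through the small aperture, while a "set" core responds to it and then springs back.

```python
class Transfluxor:
    """Toy model of transfluxor readout: reading never changes the stored state."""

    def __init__(self):
        self.state = "blocked"   # fully saturated by a large current through the central aperture

    def write_block(self):
        self.state = "blocked"   # large current through the central aperture

    def write_set(self):
        self.state = "set"       # large current, then a smaller current in the reverse direction

    def read(self) -> int:
        # A pulse through the small aperture produces a sense-wire signal only when the
        # flux around the small hole is free to switch, i.e. when the core is "set".
        # The surrounding flux then restores the local change, so no rewrite is needed.
        return 1 if self.state == "set" else 0

t = Transfluxor()
t.write_set()
print(t.read(), t.read())   # 1 1  -- repeated reads do not disturb the stored bit
t.write_block()
print(t.read())             # 0
```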
The following diagram illustrates two things: that one could design a thin-film memory that works more like a core memory, and that, if having only a factor of two between a magnetic field that does not affect stored information and one that changes the stored bit is awkward, one can, in either a thin-film memory or a core memory, put additional wires running by each spot or through each core. At the cost of some additional external circuitry, current then runs through four wires going through the core being read or written, but through at most one wire for any other core. The wires running diagonally belong to the sense line; while using the diagonal line for a five-way coincidence is possible, if a select line and the sense line ran in parallel, the pulse along that select line would produce a strong signal on the sense line. One of the issues preventing tunnel diodes from being used in fast microprocessors is that they only work if they are in a very narrow range of characteristics, and thus the yield of a chip with more than one tunnel diode on it is unacceptable. A design like the above might make fabrication of integrated circuit memories with zero transistors per cell (although some transistors would be needed, proportional to a constant times log(N) * sqrt(N) for an N-bit chip) practical, by allowing the parameters of the individual magnetic storage elements to vary over a wider range. The following image shows one possible layout for very dense microchips in the future that go beyond the limits of conventional semiconductor fabrication technology. On the right and the left, as well as the top and the bottom, are shown areas produced with normal semiconductor fabrication techniques that consist of a row and column matrix, with transistors at each intersection. Row and column selection circuitry is not shown. A series of conductors, more finely spaced than is possible for the fabrication technology that produced the row and column matrix, is shown leaving it at a shallow angle. These form the rows and columns of a different type of row and column matrix that constitutes the main part of the chip. These conductors might be carbon nanotubes, for example. Since it is assumed not to be possible to use conventional lithography to fabricate transistors at the intersections of this finer row and column matrix, it is expected instead that the coincidence of current in two wires of this matrix will have an effect on the basis of the same principle used in magnetic core memories or thin-film memories. Instead of placing a continuous sheet of material between the two levels of conductors, possibly there will be a second grid of strings above the top level of conductors, going in the direction of the bottom level of conductors, to allow squares of material to be deposited on the substrate, which might have to be movable by means of microscopic mechanical actuators. Memories denser than any possible today could be made this way; but an even more exciting possibility is to use this technique to produce field-programmable gate arrays of enormous complexity. One problem is, though, that a design with only one layer of wiring would be so restricted that the disadvantages of that would outweigh the advantages of a smaller scale. However, the extra layers could all be fixed, and could be built with the help of the mechanical actuators noted above; only one layer of switchable conductors would be enough to allow any type of circuit to be built efficiently.
Another problem comes up here. It would be easy to build a transistor circuit with switchable conductors that change from conducting to insulating, just as the presence or absence of a metal layer would, or as blowing a fuse does in programmable but not erasable read-only memories; partial changes in conductivity due to a magnetic effect, however, require special circuitry to make use of them. Perhaps a simple circuit could use a magnetically-induced change in conductivity to determine if a fuse will be blown. Another way to improve the performance of future computers, instead of increasing the number of components that could be placed on a chip, would be to produce logic gates that work more quickly in some other way than by making them smaller. For many years, IBM pursued advanced research into the use of Josephson junctions, a type of tunnel diode that relied on superconductivity, as a promising way to make computers operate very quickly, but finally abandoned the effort in 1983. Josephson junctions have been put to commercial use in areas such as sensitive magnetic-field sensors and accurate voltage standards. In 1995, Konstantin Likharev and several collaborators developed a new way to use the Josephson junction in a form of digital logic known as Rapid Single Flux Quantum logic. The breakthrough that made more effective use of Josephson junctions in digital logic possible was to use circuits that manipulated short electrical pulses rather than producing a continuous output voltage to represent a logic state. An experimental microprocessor using this technology, operating at a frequency of 20 GHz, was built some years ago as part of a project supported by the U.S. Government to greatly extend the possible speed of computing. Because this form of logic is based upon individual pulses, the accuracy of the clock signal is normally critical; if one uses additional circuit elements, however, an asynchronous logic family is possible where both logic 0 and logic 1 involve a pulse, but a pulse sent down a different one of two wires. As it becomes possible to place more and more transistors on a single chip, the limit to the benefits that can be obtained through making a single processing unit more powerful is reached. Thus, we see chips today that place two or more processing cores on a single die. What if it were desired to put thousands of tiny computers on a single chip, and to interconnect them as though they were in a large 3-dimensional cube, each computer connected to its six nearest neighbors? The idea might be that such a design could run programs to simulate three-dimensional physical systems, such as the Earth's atmosphere, or even simulate the operation of the human brain, a three-dimensional mass of neurons whose connections are at least somewhat limited by range. Would even the vastly simpler connections of a 3-dimensional cube, being three-dimensional, overwhelm what is available on the two-dimensional surface of a microchip? No, as the diagram below illustrates: Only two layers of metallization are required to allow each cell in the staggered or rotated 4 by 4 arrays in the diagram to connect to the corresponding cell in the next array above or below, and the corresponding cell in the next array to the left or right.
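One way to picture the interconnection scheme just described is to give each small processor a coordinate (x, y, z), where (x, y) selects one of the 4 by 4 arrays and z is the cell's position within that array. The sketch below (my own illustrative mapping, not taken from the diagram) lists a node's six neighbours and notes which links run between arrays on the two metallization layers and which stay inside an array:

```python
def neighbours(x: int, y: int, z: int, nx: int, ny: int, nz: int = 16):
    """Six nearest neighbours of node (x, y, z) in a 3-D mesh.
    (x, y) picks the 4x4 array on the chip; z is the cell within that array."""
    result = []
    # Inter-array links: the corresponding cell in the array to the left/right
    # (one metallization layer) and in the array above/below (the other layer).
    if x > 0:      result.append(((x - 1, y, z), "inter-array, layer 1"))
    if x < nx - 1: result.append(((x + 1, y, z), "inter-array, layer 1"))
    if y > 0:      result.append(((x, y - 1, z), "inter-array, layer 2"))
    if y < ny - 1: result.append(((x, y + 1, z), "inter-array, layer 2"))
    # Intra-array links: neighbouring cells chained together within the same array.
    if z > 0:      result.append(((x, y, z - 1), "within array"))
    if z < nz - 1: result.append(((x, y, z + 1), "within array"))
    return result

for node, kind in neighbours(2, 3, 7, nx=8, ny=8):
    print(node, kind)
```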
Connecting the cells within each array to one another, either along a linear path (making a third dimension) or in a two-dimensional scheme (thereby providing interconnects with the topology of a four-dimensional hypercube), does not require any additional layers; but in practice, instead of squeezing those connections between the connections already present in those two layers in the area between the cells, the metallization layers used for wiring within the cells will be used instead, allowing both more freedom in positioning the connections and a reduction in capacitances. Thus, the problem will be pins to connect between chips, and not connections between processing units on chips. The way a vacuum tube works is relatively simple and easy to understand. A hot filament is placed inside a metal cylinder. The metal cylinder is called the cathode, and when it is heated, it can emit electrons, provided it is kept at a negative voltage relative to a larger metal cylinder surrounding it, the anode. To improve the efficiency of this process, the cathode usually is covered with a special coating, possibly a mixture of barium and strontium oxides. (This is somewhat reminiscent of the Welsbach mantle on old gas lanterns: a mixture of thorium and cerium nitrates also behaves in an unusual way when heated, glowing with a bright white light where other materials, at that temperature, would only glow with a dull red light, as expected from blackbody radiation.) Between the anode and cathode, one or more grids may be placed; if the grid is at a negative electrical potential, it will repel electrons going from the cathode to the anode; if it is at a positive electrical potential, it will speed them along their way. It takes only a small amount of energy to control a large flow of energy, making a vacuum tube an electrical valve, or an amplifier. Incidentally, because both voltage and current are amplified by a vacuum tube, not just their product, power, vacuum tubes can be used in a convenient fashion in electronic circuits without having to convert between direct and alternating current; transistors also have this desirable characteristic (although a bipolar transistor can be used in configurations other than the common-emitter configuration, where one or the other form of amplification is not needed). Transistors are more complicated to understand than vacuum tubes. For reasons connected with the nature of the spherically symmetric solutions to Schrödinger's equation in the Coulomb potential of the nucleus, and the Pauli exclusion principle, the electrons orbiting an atomic nucleus organize themselves into "shells". This is the origin of Mendeleev's Periodic Table of the Elements. Except for hydrogen and helium, whose outermost shell has room for two electrons, the outermost shell of electrons for an atom has room for eight electrons. Every atom, in its normal state, has as many electrons as it has protons in its nucleus, since anything that is electrically charged strongly attracts things with an opposite electrical charge. But there is also a weaker effect that means that having an outer electron shell that is completely full or completely empty is also a movement "downhill" to a lower energy state for an atom. If atoms were isolated from each other, this wouldn't matter, because it is much weaker than the basic electrostatic advantage of being electrically neutral.
But if two atoms come close together, one lacking an electron to complete its last shell, and the other having just one electron in its outermost shell, a lower-energy state can be achieved by the atom with the extra electron giving it to the other atom, and then staying close to that atom because of the attraction caused by the difference in electrical charge. This is how atoms combine into molecules; the number of extra, or lacking, electrons in the outermost shell of an atom is called its valence. In addition to a simple ionic bond as described above, atoms can also share electrons in other, more complicated, ways. In a salt crystal, each sodium atom gives up its extra electron so that each chlorine atom can have an extra electron, but then the sodium and chlorine atoms, attracting each other regardless of where any particular electron came from, form a cubic lattice of alternating sodium and chlorine atoms, so that each sodium atom is joined to six chlorine atoms by, essentially, one-sixth of an ionic bond each. Silicon, like carbon, has four electrons in its outermost shell. Carbon atoms form strong individual covalent bonds with other carbon atoms, forming structures with definite bonds like graphite or diamond. Silicon dioxide has a structure like diamond's, but with an oxygen atom between each pair of silicon atoms. Pure silicon behaves in a different way; large masses of silicon atoms simply pool all their electrons together. This is also the way most metals behave, and it is the reason pure silicon is shiny and silvery like a metal in appearance. Most metals, though, have only two electrons in their outermost shell. In a metal, therefore, the atoms tend to remain aloof from the electron cloud formed by the left-over electrons; the cloud must not drift so far away as to leave the metal positively charged, but it cannot contribute to making the outer shell of any atom complete. (The reason so many metals have two outer electrons is that, from one element to the next, new electrons are being added to a new kind of shell, with room for ten electrons, that is buried within the atom's electronic structure and does not get involved in chemical reactions.) In silicon, though, the four extra electrons from one atom, plus four others from its neighbor, could also make a complete shell. If a piece of silicon is peppered with atoms of an element, like arsenic, with five outer electrons, it starts behaving like a metal, because a fifth electron doesn't make sense to the silicon atoms. (Or, to be more precise, since the silicon atoms are arranged in a crystal structure deriving from the fact that they have four outer electrons, the arsenic atom needs to fit into that structure, which leaves its extra electron free.) If a piece of silicon is peppered with atoms of an element like boron, with three outer electrons, it also becomes conductive, but this time it is a deficiency of one electron, called a "hole", that is free to move through the substance. This is because we are dealing with a piece of silicon containing only a very small amount of the impurity, also called a dopant; the three electrons of the impurity are therefore considered in relation to the electron shell of silicon. A pure substance with three left-over electrons for each atom, such as the metal aluminum, would instead tend to let those electrons move about freely. A semiconductor diode can be formed by applying opposing impurities to opposite ends of a small piece of silicon.
If an applied voltage makes electrons in the n-type silicon flow towards the junction, while holes in the p-type silicon flow towards the junction from the other side, current can flow. A voltage in the other direction soon causes the vicinity of the junction to run out of charge carriers, both electrons and holes, and so the diode's resistance increases. How can this principle be used to make an amplifier? The principle of the field-effect transistor is simple enough: if you placed a thin metal sheet between insulating layers, and then put a large negative voltage on metal plates outside the insulating layers, the electric field would force the electrons in the metal conductor in the middle into a smaller thickness of the metal, thus increasing its resistance. In practice, using metal foils and normal capacitor construction, such a device would produce an extremely weak effect, and would not serve as an amplifier. A field-effect transistor works because it doesn't just rely upon capacitance to put the squeeze on the current flow from source to drain. The junction between the gate terminal and the semiconductor material connecting the source and the drain is a reverse-biased diode. Thus, a field-effect transistor is designed so that if the voltage on the gate terminal is strong enough, the area through which current must flow to go from source to drain is deprived of charge carriers. But the original form of the transistor, the bipolar transistor, is much harder to understand. An NPN transistor consists of a strongly-doped area of n-type silicon, the emitter, separated by a very thin gap of p-type silicon, the base, from a weakly-doped area of n-type silicon, the collector. The junction between the emitter and the base behaves like the junction in a diode; electrons flow only from the emitter to the base. The base, however, is more lightly doped than the emitter, and it is a very thin region between the n-type emitter and the n-type collector. It is thin enough that electrons flowing from the emitter into the base may continue on into the collector; they will not necessarily collide with a hole in the base, which is the event needed for a flow of electrons in the emitter-to-collector direction to be converted to a flow of holes in the collector-to-emitter direction. Of course, a few of them will still collide. And an electron current in the collector combined with a hole current in the base will lead to the normal situation of a reverse-biased diode: removing holes from the base side of the collector-base junction also means removing electrons from the collector side of that junction, so that an electron current cannot continue. But if the holes are supplied by current flow from the base terminal, instead of being taken from the limited supply near the base-collector junction, then this depletion does not develop, and so the current into the base is, in effect, amplified by the device.
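The practical upshot of this mechanism can be stated very simply: a small current into the base controls a much larger current through the collector. Here is a minimal sketch, assuming an idealized forward-active transistor with a current gain (beta) of 100, a figure chosen for illustration rather than one given in the text.

    def collector_current(base_current: float, beta: float = 100.0) -> float:
        """Idealized forward-active approximation: Ic = beta * Ib."""
        return beta * base_current

    ib = 20e-6                      # 20 microamps into the base (hypothetical)
    ic = collector_current(ib)      # about 2 milliamps flowing into the collector
    ie = ib + ic                    # the emitter carries the sum of the two
    print(f"Ib = {ib * 1e6:.0f} uA -> Ic = {ic * 1e3:.2f} mA, Ie = {ie * 1e3:.2f} mA")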
http://www.quadibloc.com/comp/cp01.htm
13
51
Area of Triangles Without Right Angles

If You Know Base and Height
It is easy to find the area of a right-angled triangle, or any triangle where we are given the base and the height. It is simply half of b times h: Area = ½bh (the Triangles page tells you more about this).
Example: What is the area of this triangle? Height = h = 12, Base = b = 20. Area = ½bh = ½ × 20 × 12 = 120.

If You Know Three Sides
There's also a formula to find the area of any triangle when we know the lengths of all three of its sides. This can be found on the Heron's Formula page.

If You Know Two Sides and the Included Angle
If we know two sides and the included angle (SAS), there is another formula (in fact three equivalent formulas) we can use. Depending on which sides and angles we know, the formula can be written in three ways:
Either Area = ½ab sin C
Or Area = ½bc sin A
Or Area = ½ca sin B
They are really the same formula, just with the sides and angle changed.

Example: Find the area of this triangle.
First of all we must decide what we know. We know angle C = 25°, and sides a = 7 and b = 10. So let's get going:
Start with: Area = ½ab sin C
Put in the values we know: Area = ½ × 7 × 10 × sin(25°)
Do some calculator work: Area = 35 × 0.4226...
Area = 14.8 to one decimal place

How to Remember
Just think "abc": Area = ½ a b sin C

How Does it Work?
Well, we know that we can find an area if we know a base and height: Area = ½ × base × height. In this triangle the base is c, and the height is b × sin A. Putting that together gets us: Area = ½ × (c) × (b × sin A), which is (more simply): Area = ½bc sin A. By changing the labels on the triangle we can also get:
- Area = ½ab sin C
- Area = ½ca sin B

One more example:
Example: Find How Much Land
Farmer Jones owns a triangular piece of land. The length of the fence AB is 150 m. The length of the fence BC is 231 m. The angle between fence AB and fence BC is 123°. How much land does Farmer Jones own?
First of all we must decide which lengths and angles we know:
- AB = c = 150 m,
- BC = a = 231 m,
- and angle B = 123°
So we use: Area = ½ca sin B
Start with: Area = ½ca sin B
Put in the values we know: Area = ½ × 150 × 231 × sin(123°) m²
Do some calculator work: Area = 17,325 × 0.838... m²
Area = 14,530 m²
Farmer Jones has 14,530 m² of land
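For readers who want to check these results numerically, here is a small sketch, not part of the original page, using Python; note that the math module works in radians, so degrees have to be converted first.

    import math

    def triangle_area(side1: float, side2: float, included_angle_deg: float) -> float:
        """Area of a triangle from two sides and the included angle: (1/2) * a * b * sin(C)."""
        return 0.5 * side1 * side2 * math.sin(math.radians(included_angle_deg))

    print(round(triangle_area(7, 10, 25), 1))   # 14.8, matching the first example
    print(round(triangle_area(150, 231, 123)))  # about 14530, Farmer Jones's land in m^2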
http://www.mathsisfun.com/algebra/trig-area-triangle-without-right-angle.html
13
128
The Buckingham π theorem is of central importance to dimensional analysis. This theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the number of fundamental dimensions used. Furthermore, and, most important, it provides a method for computing these dimensionless parameters from the given variables. The unit of a physical quantity and its dimension are related, but not precisely identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of meters, feet, inches, miles or micrometres; but any length always has a dimension of L, independent of what units are arbitrarily chosen to measure it. Two different units of the same physical quantity have conversion factors between them. For example: 1 in = 2.54 cm; then (2.54 cm/in) is called a conversion factor (between two representations expressed in different units of a common quantity) and is itself dimensionless and equal to one. There are no conversion factors between dimensional symbols. Dimensional symbols, such as L, form a group: There is an identity, L0 = 1; there is an inverse to L, which is 1/L or L−1, and L raised to any rational power p is a member of the group, having an inverse of L−p or 1/Lp. The operation of the group is multiplication, with the usual rules for handling exponents (Ln × Lm = Ln+m). In mechanics, the dimension of any physical quantity can be expressed in terms of base dimensions M, L and T. This is not the only possible choice, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M. The choice of the base set of dimensions is, thus, partly a convention, resulting in increased utility and familiarity. It is, however, important to note that the choice of the set of dimensions is not just a convention; for example, using length, velocity and time as base dimensions will not work well, because there is no way to obtain mass — or anything derived from it, such as force — without introducing another base dimension, and velocity, being derived from length and time, is redundant. Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of M, L, T, and Q, where Q represents quantity of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry the number of moles of substance (loosely, but not precisely, related to the number of molecules or atoms) is often involved and a dimension for this is used as well. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are important. In the most primitive form, dimensional analysis may be used to check the plausibility of physical equations: The two sides of any equation must be commensurable or have the same dimensions, i.e., the equation must be dimensionally homogeneous. As a corollary of this requirement, it follows that in a physically meaningful expression, only quantities of the same dimension can be added or subtracted. 
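As an illustration of the group structure just described, here is a minimal Python sketch of my own, not from the source, that represents a dimension in mechanics as a triple of exponents of (M, L, T); multiplying or dividing quantities adds or subtracts those exponents, and the identity (0, 0, 0) is the dimensionless case.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Dim:
        M: int = 0
        L: int = 0
        T: int = 0

        def __mul__(self, other: "Dim") -> "Dim":
            return Dim(self.M + other.M, self.L + other.L, self.T + other.T)

        def __truediv__(self, other: "Dim") -> "Dim":
            return Dim(self.M - other.M, self.L - other.L, self.T - other.T)

    LENGTH, TIME, MASS = Dim(L=1), Dim(T=1), Dim(M=1)

    velocity = LENGTH / TIME            # L T^-1
    force = MASS * velocity / TIME      # M L T^-2
    print(force)                        # Dim(M=1, L=1, T=-2)
    print(force / force)                # Dim(M=0, L=0, T=0): the identity, i.e. dimensionless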
For example, the mass of a rat and the mass of a flea may be added, but the mass of a flea and the length of a rat cannot be meaningfully added. Physical quantities having different dimensions cannot be compared to one another either. For example, "3 m > 1 g" is not a meaningful expression. Only like-dimensioned quantities may be added, subtracted, compared, or equated. When unlike-dimensioned quantities appear on opposite sides of a "+" or "−" or "=" sign, that physical equation is not plausible, which might prompt one to correct errors before proceeding to use it. When quantities are multiplied or divided, whether of like or unlike dimension, their dimensional symbols are likewise multiplied or divided. When dimensioned quantities are raised to a rational power, the same is done to the dimensional symbols attached to those quantities. Scalar arguments to exponential, trigonometric, logarithmic, and other transcendental functions must be dimensionless quantities. This requirement is clear when one looks at the Taylor expansions of these functions (a sum of various powers of the function argument). For example, the logarithm of 3 kg is undefined even though the logarithm of 3 is nearly 0.477: an attempt to compute ln(3 kg) would produce ln 3 + ln kg, and no meaning can be attached to the logarithm of a kilogram. The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical factor n, that is, Z = n × [Z]. In a strict sense, when like-dimensioned quantities are added or subtracted or compared, these dimensioned quantities must be expressed in consistent units so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 meter added to 1 foot is a length, but it would not be correct to add 1 to 1 to get the result. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed; since 1 ft = 0.3048 m exactly, one such factor is (0.3048 m / 1 ft). The factor is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then, when adding two quantities of like dimension but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to identical units so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time (Pesic, 2005) in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. As a simple example, consider a mass m hanging from a spring of stiffness k and oscillating with period T in a gravitational field of strength g; the dimensions of these quantities are [T] = T, [m] = M, [k] = M/T², and [g] = L/T², and the only dimensionless product of powers that can be formed from them is T²k/m. Note that no other dimensionless product of powers involving k, m, T, and g alone can be formed, because only g involves L. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T = κ√(m/k), for some dimensionless constant κ.
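As a numerical sanity check of my own, not from the source: solving the equation of motion in full gives κ = 2π, so the period can be computed from the familiar formula T = 2π√(m/k), in which g indeed never appears.

    import math

    def spring_period(mass_kg: float, stiffness_n_per_m: float) -> float:
        """Period of a mass on an ideal spring, T = 2*pi*sqrt(m/k); note that g does not appear."""
        return 2 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

    # Hypothetical figures: a 0.5 kg mass on a 200 N/m spring.
    print(f"T = {spring_period(0.5, 200):.3f} s")   # about 0.314 s, on the Earth or on the Moon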
When faced with a case where our analysis rejects a variable (g, here) that we feel sure really belongs in a physical description of the situation, we might also consider the possibility that the rejected variable is in fact relevant, and that some other relevant variable has been omitted which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields a solution of a problem where only one dimensionless product of powers is involved, as here, there are no unknown functions, and the solution is said to be "complete." Consider, as a further example, the energy E stored in a wire of length ℓ vibrating with amplitude A under tension s (the linear density of the wire turns out not to be involved). Dimensional analysis reduces this problem to an equation of the form F(E/(As), ℓ/A) = 0, where F is some unknown function, or, equivalently, E = A s f(ℓ/A), where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓ s f(A/ℓ). The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually lift the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. As an example of the usefulness of the first refinement, Huntley's suggestion that the components of length in different directions be treated as distinct dimensions Lx, Ly and Lz, suppose we wish to calculate the distance a cannon ball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then vx and vy, both dimensioned as L/T, R, the distance travelled, having dimension L, and g, the downward acceleration of gravity, with dimension L/T². With these four quantities, we may conclude that the equation for the range R may be written R ∝ vx^a vy^b g^c, from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected, since we have two fundamental dimensions, L and T, and four parameters, with one equation. If, however, we use directed length dimensions, then vx will be dimensioned as Lx/T, vy as Ly/T, R as Lx and g as Ly/T². The dimensional equation becomes Lx = (Lx/T)^a (Ly/T)^b (Ly/T²)^c, and we may solve completely as a = 1, b = 1 and c = −1, giving R ∝ vx vy / g. The increase in deductive power gained by the use of directed length dimensions is apparent. In a similar manner, it is sometimes found useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass) and mass as a measure of quantity (substantial mass). For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe.
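The directed-length result R ∝ vx·vy/g can be checked against elementary kinematics, which for launch and landing at the same height gives the constant of proportionality as 2; the following check is my own and not part of the source.

    def range_flat(vx: float, vy: float, g: float = 9.81) -> float:
        """Range of a projectile on flat ground: time of flight (2*vy/g) times horizontal speed vx."""
        return 2.0 * vx * vy / g          # i.e. C = 2 in R = C * vx * vy / g

    # Hypothetical launch: 30 m/s horizontally and 40 m/s vertically.
    print(f"R = {range_flat(30.0, 40.0):.1f} m")   # about 244.6 m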
Without drawing a distinction between inertial and substantial mass, we may choose as the relevant variables the mass flow rate ṁ (dimension M/T), the pressure gradient along the pipe px (M/(L²T²)), the density ρ (M/L³), the dynamic viscosity η (M/(LT)), and the radius of the pipe r (L). There are three fundamental dimensions, so these five quantities yield two independent dimensionless groups, which may be taken to be π1 = ṁ/(ηr) and π2 = ρ px r⁵/ṁ², and we may express the dimensional equation as π1 = C π2^a, where C and a are undetermined constants. If we draw a distinction between inertial mass, with dimension Mi, and substantial mass, with dimension Ms, then the mass flow rate and the density will use substantial mass as the mass parameter, while the pressure gradient and the coefficient of viscosity will use inertial mass. We now have four fundamental dimensions and one dimensionless constant, so that the dimensional equation may be written ṁ = C ρ px r⁴/η, where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's extension has some serious drawbacks. It does not deal well with vector equations involving the cross product, nor does it handle well the use of angles as physical variables. It is also often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem, and this is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved, with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference between the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's addition to real problems. Angles are, by convention, considered to be dimensionless variables, and so the use of angles as physical variables in dimensional analysis can give less meaningful results. As an example, consider the projectile problem mentioned above. Suppose that, instead of the x- and y-components of the initial velocity, we had chosen the magnitude of the velocity v and the angle θ at which the projectile was fired. The angle is, by convention, considered to be dimensionless, and the magnitude of a vector has no directional quality, so that no dimensionless variable can be composed of the four variables g, v, R, and θ. Conventional analysis will correctly give the powers of g and v, but will give no information concerning the dimensionless angle θ. Siano (Siano, 1985-I, 1985-II) has suggested that the directed dimensions of Huntley be replaced by orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L·1x, with L specifying the dimension of length and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i⁻¹ = 1i, the following multiplication table for the orientation symbols results: 10 is the identity, so 10·10 = 10, 10·1x = 1x, 10·1y = 1y and 10·1z = 1z; each symbol is its own inverse, so 1x·1x = 1y·1y = 1z·1z = 10; and the product of two distinct non-identity symbols is the remaining one, so 1x·1y = 1z, 1y·1z = 1x and 1z·1x = 1y. Note that the orientational symbols form a group (the Klein four-group or "Viergruppe").
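As a purely numerical illustration of my own, the result ṁ = (π/8)·ρ·px·r⁴/η can be evaluated for a hypothetical laminar water flow; all the figures below are assumed values, not data from the source.

    import math

    def poiseuille_mass_flow(density: float, pressure_gradient: float, radius: float, viscosity: float) -> float:
        """Mass flow rate through a circular pipe: m_dot = (pi/8) * rho * (dp/dx) * r**4 / eta."""
        return (math.pi / 8.0) * density * pressure_gradient * radius**4 / viscosity

    # Assumed figures: water (rho ~ 1000 kg/m^3, eta ~ 1.0e-3 Pa*s),
    # a 5 mm radius pipe, and a pressure drop of 10 Pa per metre of pipe.
    m_dot = poiseuille_mass_flow(1000.0, 10.0, 0.005, 1.0e-3)
    print(f"mass flow rate ~ {m_dot:.4f} kg/s")   # slow enough that the flow stays laminar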
In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem." Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation 1z. For angles, consider an angle θ that lies in the z-plane: form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since tan(θ) = θ + ... has the orientation of the opposite side divided by the adjacent side, 1y/1x, we conclude that an angle in the xy-plane must have the orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a sin(θ) + b cos(θ), where a and b are scalars. The assignment of orientational symbols to physical quantities, and the requirement that physical equations be orientationally homogeneous, can actually be used in a way that is similar to dimensional analysis to derive a little more information about acceptable solutions of physical problems. In this approach one sets up the dimensional equation and solves it as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into "normal form". The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols, arriving at a solution that is more complete than the one that dimensional analysis alone gives. Often the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols: θ, being in the xy-plane, will thus have dimension 1z, and the range of the projectile R will be of the form R ∝ g^a v^b θ^c. Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that c be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series of odd powers of θ. It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical. It should be clear that the multiplication rule used for the orientational symbols is not the same as that for the cross product of two vectors. The cross product of two identical vectors is zero, while the product of two identical orientational symbols is the identity element. It has been argued by some physicists, e.g., Michael Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
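A tiny sketch of my own of this orientational algebra, representing 10, 1x, 1y and 1z as strings and encoding the Klein four-group rules used above:

    # Hypothetical sketch: Siano's orientational symbols as a small lookup rule.
    # The identity is 1_0, each symbol is its own inverse, and the product of two
    # distinct non-identity symbols is the remaining one (the Klein four-group).

    SYMBOLS = ("1_0", "1_x", "1_y", "1_z")

    def omul(a: str, b: str) -> str:
        if a == "1_0":
            return b
        if b == "1_0":
            return a
        if a == b:
            return "1_0"
        return next(s for s in SYMBOLS[1:] if s not in (a, b))

    print(omul("1_y", "1_x"))   # 1_z : an angle in the xy-plane has orientation 1_z
    print(omul("1_z", "1_z"))   # 1_0 : even powers of such an angle are orientationless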
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
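One concrete way to see c, ħ, and G acting as conversion factors between mass, length and time, offered here as my own illustration rather than something from the source, is to combine them into the Planck units:

    import math

    # Approximate CODATA-style values, assumed for illustration.
    c = 2.99792458e8        # speed of light, m/s
    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

    planck_length = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m
    planck_time = math.sqrt(hbar * G / c**5)     # about 5.4e-44 s
    planck_mass = math.sqrt(hbar * c / G)        # about 2.2e-8 kg

    print(planck_length, planck_time, planck_mass)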
http://www.reference.com/browse/possible+choice
13
74
Introduction to Trigonometry
Trigonometry (from Greek trigonon "triangle" + metron "measure"). Want to Learn Trigonometry? Here are the basics! Follow the links for more, or go to the Trigonometry Index. Trigonometry is all about triangles.

Right Angled Triangle
A right-angled triangle (the right angle is shown by the little box in the corner) has names for each side: the side opposite the right angle is the Hypotenuse (the longest side), the side opposite the angle θ is the Opposite, and the remaining side, next to θ, is the Adjacent. Angles themselves can be measured in degrees or radians; a straight angle, for example, is 180°, which is π radians.

"Sine, Cosine and Tangent"
The three most common functions in trigonometry are Sine, Cosine and Tangent. You will use them a lot! They are simply one side of a triangle divided by another. For any angle "θ":
sin(θ) = Opposite / Hypotenuse
cos(θ) = Adjacent / Hypotenuse
tan(θ) = Opposite / Adjacent
Example: What is the sine of 35°? Using this triangle (lengths are only to one decimal place): sin(35°) = Opposite / Hypotenuse = 2.8/4.9 = 0.57...
Sine, Cosine and Tangent are often abbreviated to sin, cos and tan.
Have a try! As the angle changes, so do sine, cosine and tangent, and you will also see why trigonometry is also about circles! Notice that the sides can be positive or negative according to the rules of Cartesian coordinates. This makes the sine, cosine and tangent vary between positive and negative also.

What we have just been playing with is the Unit Circle. It is just a circle with a radius of 1 with its center at 0. Because the radius is 1, it is easy to measure sine, cosine and tangent. The sine function can be pictured as being traced out by a point moving around the unit circle, and sine, cosine and tangent all make nice graphs. Because the angle is rotating around and around the circle, the Sine, Cosine and Tangent functions repeat once every full rotation. When you need to calculate the function for an angle larger than a full rotation of 2π (360°), just subtract as many full rotations as you need to bring it back below 2π (360°):
Example: what is the cosine of 370°? 370° is greater than 360°, so let us subtract 360°. 370° − 360° = 10°. cos(370°) = cos(10°) = 0.985 (to 3 decimal places)
Likewise, if the angle is less than zero, just add full rotations.
Example: what is the sine of −3 radians? −3 is less than 0, so let us add 2π radians. −3 + 2π = −3 + 6.283 = 3.283 radians. sin(−3) = sin(3.283) = −0.141 (to 3 decimal places)

A big part of Trigonometry is Solving Triangles. By "solving" I mean finding missing sides and angles.
Example: Find the Missing Angle "C". It's easy to find angle C by using the fact that the angles of a triangle add to 180°: So C = 180° − 76° − 34° = 70°
It is also possible to find missing side lengths and more. The general rule is: if you know any 3 of the sides or angles, you can find the other 3 (except for the three-angles case). See Solving Triangles for more details.

Other Functions (Cotangent, Secant, Cosecant)
Similar to Sine, Cosine and Tangent, there are three other trigonometric functions which are made by dividing one side by another:
Cosecant: csc(θ) = Hypotenuse / Opposite
Secant: sec(θ) = Hypotenuse / Adjacent
Cotangent: cot(θ) = Adjacent / Opposite

Trigonometric and Triangle Identities
The Trigonometric Identities are equations that are true for all right-angled triangles. The Triangle Identities are equations that are true for all triangles (they don't have to have a right angle). Enjoy becoming a triangle (and circle) expert!
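A short sketch, mine rather than the page's, that reproduces the worked examples above; Python's math functions use radians, so degrees are converted, and angles outside 0 to 2π are reduced with the modulo operator.

    import math

    def cos_deg(angle_deg: float) -> float:
        """Cosine of an angle given in degrees, reducing full rotations first."""
        return math.cos(math.radians(angle_deg % 360))

    print(round(cos_deg(370), 3))        # 0.985, the same as cos(10 degrees)

    angle = -3 % (2 * math.pi)           # -3 radians plus one full rotation = 3.283...
    print(round(math.sin(angle), 3))     # -0.141, the same as sin(-3)

    # The missing-angle example: the angles of a triangle add to 180 degrees.
    print(180 - 76 - 34)                 # 70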
http://www.mathsisfun.com/algebra/trigonometry.html
13